@AbanoubGhadban AbanoubGhadban commented Nov 23, 2025

Problem

We need to run async gem code inside the each_chunk method for the upcoming React on Rails async props implementation. However, doing so raises the error:

Running scheduler on non-blocking fiber

This occurs because each_chunk executes within a manually created Fiber rather than one managed by the async gem, which requires that any fiber running async code be created and managed by the gem itself.

Solution

Replace manual Fiber management with async gem primitives for a unified concurrency model:

  • Async::Barrier: Tracks all component streaming tasks
  • Async::Variable: Synchronizes first chunk delivery
  • Async::LimitedQueue: Single output queue for all components
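
A minimal, self-contained sketch (not the library code, and simplified relative to the PR) of how these three primitives cooperate; the names barrier, queue, and first are illustrative, and the sketch assumes the queue close semantics used throughout this PR (dequeue returns nil once the queue is closed and empty):

require "async"
require "async/barrier"
require "async/variable"
require "async/limited_queue"

Sync do
  barrier = Async::Barrier.new          # tracks every streaming task
  queue   = Async::LimitedQueue.new(8)  # bounded queue => backpressure
  first   = Async::Variable.new         # one-shot handoff for the first chunk

  barrier.async do
    first.value = "chunk-1"             # unblocks the synchronous waiter below
    queue.enqueue("chunk-2")
    queue.enqueue("chunk-3")
  end

  first.wait        # blocks until the task resolves the variable
  puts first.value
  barrier.wait      # waits for all tasks registered on the barrier
  queue.close       # makes dequeue return nil once the queue is empty
  while (chunk = queue.dequeue)
    puts chunk
  end
end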

Architecture Changes

Before (Manual Fibers):

# Controller
@rorp_rendering_fibers = []
render_view
drain_streams_concurrently  # Sync do block here

# Helper
fiber = Fiber.new { stream.each_chunk { |c| Fiber.yield c } }
@rorp_rendering_fibers << fiber
fiber.resume  # First chunk

# Drain
parent.async { loop { fiber.resume } }  # Resume fiber in async task

Problem: Fiber created manually, not by async gem → can't run async code inside

After (Async Primitives):

# Controller
Sync do  # Sync block at START
  @async_barrier = Async::Barrier.new
  @main_output_queue = Async::LimitedQueue.new
  render_view
  drain_streams_concurrently
end

# Helper
first_chunk_var = Async::Variable.new
@async_barrier.async do  # Async-managed fiber
  stream.each_chunk { |c| ... }  # Can run async code here now!
end
first_chunk_var.wait  # Blocking wait for first chunk

# Drain
@async_barrier.wait  # Wait for all tasks
@main_output_queue.dequeue  # Drain chunks

Solution: All fibers created by async gem → can safely run async code

Code Changes

1. stream_view_containing_react_components (stream.rb)

Changes:

  • ✅ Wrap entire method in Sync do block
  • ✅ Initialize @async_barrier and @main_output_queue before view rendering
  • ✅ Components start async tasks immediately during view rendering

2. run_stream_inside_fiber (helper)

Changes:

  • ✅ Replace Fiber.new with @async_barrier.async
  • ✅ Use Async::Variable for first chunk synchronization
  • ✅ First chunk → variable, remaining chunks → queue
  • ✅ Handle empty stream case

3. drain_streams_concurrently (stream.rb)

Changes:

  • ✅ Simplified from 45 lines to 11 lines
  • ✅ Just: barrier.wait → queue.close → dequeue loop
  • ✅ Removed build_producer_tasks and build_writer_task methods

Benefits

✅ Enables Async Props Implementation

Primary Goal: Can now run async gem code inside each_chunk without errors
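
As a hedged illustration of what this unlocks (FakeStream is a stand-in, not the actual Pro code): inside a barrier-owned task, awaiting other async tasks is legal, which is exactly what the body of a plain Fiber.new could not do:

require "async"
require "async/barrier"

# Stand-in for a component stream whose each_chunk yields strings.
FakeStream = Struct.new(:chunks) do
  def each_chunk(&block)
    chunks.each(&block)
  end
end

Sync do
  barrier = Async::Barrier.new
  barrier.async do
    FakeStream.new(%w[a b]).each_chunk do |chunk|
      # Safe here: this fiber belongs to the async reactor, so we can
      # await child tasks (e.g. async props) without the scheduler error.
      puts Async { chunk.upcase }.wait
    end
  end
  barrier.wait
end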

✅ Unified Concurrency Model

All concurrency handled by async gem primitives - no manual Fiber management

✅ Simpler Code

  • -40 lines total (60 deleted, 20 added)
  • -34 lines in stream.rb
  • Clearer intent and flow

✅ Same Functionality

  • ✅ Concurrent component streaming preserved
  • ✅ Backpressure via LimitedQueue preserved (see the sketch after this list)
  • ✅ Error propagation preserved
  • ✅ First chunk synchronous delivery preserved
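
A small sketch of the backpressure behavior referenced above (assuming the close/nil-dequeue semantics discussed later in this thread):

require "async"
require "async/limited_queue"

Sync do |task|
  queue = Async::LimitedQueue.new(1)  # capacity 1, so the producer must wait

  producer = task.async do
    3.times do |i|
      queue.enqueue(i)                # blocks while the queue is full
      puts "enqueued #{i}"
    end
    queue.close
  end

  while (item = queue.dequeue)        # nil after close + empty ends the loop
    puts "dequeued #{item}"
  end
  producer.wait
end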

Testing

  • Run existing streaming integration tests
  • Verify concurrent streaming still works
  • Test with multiple components
  • Test error handling

Why This Approach?

We explored several alternatives:

  1. Keep manual Fibers - Rejected: Can't run async code inside
  2. Refactor each_chunk to be async-native - Rejected: Too large scope
  3. This approach: Start async context before view rendering, use Barrier + Variable

This is the minimal change that enables async code in streaming while maintaining all existing behavior.

🤖 Generated with Claude Code

Co-Authored-By: Claude <[email protected]>

Summary by CodeRabbit

  • New Features

    • Adds optional on-complete callback for streamed responses to enable post-stream actions (e.g., persisting streamed chunks).
    • First response chunk returned synchronously; remaining chunks delivered asynchronously for faster perceived load.
  • Refactor

    • Streaming rewritten to a barrier-based async flow for more reliable, coordinated streaming and caching.
  • Tests

    • Tests adjusted to use barrier/queue primitives and validate synchronous-first-chunk plus async remainder.



coderabbitai bot commented Nov 23, 2025

Walkthrough

Switches streaming from fiber-yield sequencing to an Async::Barrier + main output queue model: returns the first chunk synchronously via an Async::Variable, streams remaining chunks asynchronously into a main queue, and supports an optional on_complete callback to receive collected chunks (used for post-stream caching).

Changes

  • Helper: first-chunk, on_complete & caching (react_on_rails_pro/app/helpers/react_on_rails_pro_helper.rb)
    Adds on_complete support; updates run_stream_inside_fiber signature to accept on_complete:; introduces first_chunk_var for synchronous first-chunk return, collects subsequent chunks for async enqueue, and wires on_complete to persist buffered chunks for cache-miss handling.
  • Barrier-based drain & queueing (react_on_rails_pro/lib/react_on_rails_pro/concerns/stream.rb)
    Replaces per-fiber producer/writer draining with a single Async::Barrier and @main_output_queue; spawns async tasks to drain component queues into the main queue, renders the template head before draining, waits on the barrier, then drains the main queue to response.stream and closes the stream.
  • Specs: enqueue-based streaming simulation (react_on_rails_pro/spec/react_on_rails_pro/stream_spec.rb, react_on_rails_pro/spec/dummy/spec/helpers/react_on_rails_pro_helper_spec.rb)
    Tests updated to use Async::Barrier and Async::Queue in Sync blocks; simulate async enqueue into @main_output_queue, collect the initial synchronous chunk and drain queued subsequent chunks after barrier coordination; adjust expectations for cache hit/miss flows using on_complete.

Sequence Diagram(s)

sequenceDiagram
    participant Caller
    participant Helper as run_stream_inside_fiber(on_complete:)
    participant Var as Async::Variable\n(first_chunk_var)
    participant Barrier as Async::Barrier
    participant ComponentTasks as Component\nasync tasks
    participant MainQ as Main Output\nQueue
    participant Response as response.stream
    participant onComplete as on_complete (optional)

    Caller->>Helper: start streaming (opts, maybe on_complete)
    Helper->>Var: create first_chunk_var
    Helper->>Barrier: create barrier
    Helper->>Response: render template head & write initial head
    par component enqueue (concurrent)
        ComponentTasks->>MainQ: enqueue subsequent chunks
        ComponentTasks->>Var: deliver first chunk (if produced)
    and synchronous first-chunk return
        Var-->>Helper: first chunk value (or nil)
        Helper-->>Caller: return first chunk synchronously
    end
    Helper->>Barrier: wait for async component tasks to finish
    MainQ->>Response: write enqueued subsequent chunks
    Helper->>onComplete: invoke with collected chunks (if provided)
    Helper->>Response: close stream

Estimated code review effort

🎯 4 (Complex) | ⏱️ ~50 minutes

  • Inspect race conditions around Async::Variable (first_chunk_var) and barrier/task coordination.
  • Verify cache write atomicity in on_complete callback and correctness of collected-chunks buffering.
  • Confirm error/exception handling ensures stream closure and avoids deadlocks.

Possibly related issues

  • Issue #550 — Matches: implements a Promise.any-style concurrent streaming model using an Async::Barrier/main output queue as requested.

Possibly related PRs

  • PR #2015 — Strong relation: similar refactor of stream draining from fiber-based yielding to an Async-based concurrent drain/queue model.

Suggested reviewers

  • justin808

Poem

🐇 I hopped through code with careful paws,
The first nibble returns while the rest applause.
Barriers hum and queues line the trail,
Chunks gather, persist — a caching tale.
🥕✨

Pre-merge checks

❌ Failed checks (1 warning)

  • Docstring Coverage (⚠️ Warning): docstring coverage is 35.71%, below the required threshold of 80.00%. Resolution: run @coderabbitai generate docstrings to improve coverage.

✅ Passed checks (2 passed)

  • Description Check (✅ Passed): check skipped because CodeRabbit's high-level summary is enabled.
  • Title Check (✅ Passed): the title accurately describes the main change, replacing manual Fibers with Async gem primitives for component streaming, which is the core objective of the PR.


claude bot commented Nov 23, 2025

Code Review: Replace manual Fibers with Async gem primitives

Summary

This PR successfully migrates from manual Fiber management to async gem primitives (Barrier, Variable, LimitedQueue) to enable async code execution within each_chunk for the upcoming async props feature. The changes are well-architected and maintain all existing functionality while simplifying the codebase.

✅ Strengths

1. Clear Problem Statement & Architecture

The PR description excellently documents:

  • The root cause (manual Fibers incompatible with async gem)
  • Before/after architecture comparison
  • Why this approach was chosen over alternatives

2. Simplified Code (-40 lines net)

  • drain_streams_concurrently reduced from 45 lines to 11 lines
  • Removed build_producer_tasks and build_writer_task helper methods
  • Clearer intent with async primitives

3. Correct Async Patterns

The use of async primitives is well-designed:

  • Sync do wraps the entire operation correctly
  • Async::Barrier properly tracks all component tasks
  • Async::Variable elegantly handles first-chunk synchronization
  • LimitedQueue maintains backpressure

⚠️ Issues & Concerns

1. CRITICAL: Breaking Test Compatibility

Location: react_on_rails_pro/spec/react_on_rails_pro/stream_spec.rb:18-27 and react_on_rails_pro_helper_spec.rb:364-370

The tests are still using the old manual Fiber pattern:

# stream_spec.rb:18-27
@rorp_rendering_fibers = @component_queues.map do |queue|
  Fiber.new do
    loop do
      chunk = queue.dequeue
      break if chunk.nil?
      Fiber.yield chunk
    end
  end
end
# react_on_rails_pro_helper_spec.rb:369
@rorp_rendering_fibers = []

Problem: After your changes, the code expects:

  • @async_barrier (not @rorp_rendering_fibers)
  • @main_output_queue to be initialized

Impact: These tests will fail because:

  1. stream_spec.rb creates manual Fibers, but drain_streams_concurrently now calls @async_barrier.wait (undefined)
  2. Helper specs initialize @rorp_rendering_fibers = [], but run_stream_inside_fiber checks if @async_barrier.nil?

Required Fixes:

For stream_spec.rb (lines 7-30):

class StreamController
  include ReactOnRailsPro::Stream
  
  attr_reader :response

  def initialize(component_queues:, initial_response: "TEMPLATE")
    @component_queues = component_queues
    @initial_response = initial_response
  end

  def render_to_string(**_opts)
    # Initialize async primitives like the real implementation
    require "async/barrier"
    require "async/limited_queue"
    
    @async_barrier = Async::Barrier.new
    buffer_size = ReactOnRailsPro.configuration.concurrent_component_streaming_buffer_size
    @main_output_queue = Async::LimitedQueue.new(buffer_size)
    
    # Spawn async tasks for each component queue
    @component_queues.each do |queue|
      @async_barrier.async do
        loop do
          chunk = queue.dequeue
          break if chunk.nil?
          @main_output_queue.enqueue(chunk)
        end
      end
    end

    @initial_response
  end
end

For react_on_rails_pro_helper_spec.rb (lines 364-370 & 701):

# Before each streaming test:
before do
  # Initialize async primitives to mock stream_view_containing_react_components behavior
  require "async/barrier"
  require "async/limited_queue"
  
  @async_barrier = Async::Barrier.new
  buffer_size = ReactOnRailsPro.configuration.concurrent_component_streaming_buffer_size || 10
  @main_output_queue = Async::LimitedQueue.new(buffer_size)
end

# In cached streaming tests (line 701), replace:
@rorp_rendering_fibers = []
# with:
require "async/barrier"
require "async/limited_queue"
@async_barrier = Async::Barrier.new
@main_output_queue = Async::LimitedQueue.new(10)

2. Error Handling Gap

Location: react_on_rails_pro/app/helpers/react_on_rails_pro_helper.rb:305-327

Issue: If the async task fails after setting first chunk but before completing, first_chunk_var.value might not be set, causing first_chunk_var.wait to hang indefinitely.

Recommendation:

@async_barrier.async do
  stream = yield
  is_first = true

  begin
    stream.each_chunk do |chunk|
      if is_first
        first_chunk_var.value = chunk
        is_first = false
      else
        @main_output_queue.enqueue(chunk)
      end
    end
  rescue => e
    # Ensure first_chunk_var is always set to unblock the caller
    first_chunk_var.value = nil if is_first
    raise
  end

  # Handle case where stream has no chunks
  first_chunk_var.value = nil if is_first
end

3. Empty Stream Edge Case

Current code (line 322):

# Handle case where stream has no chunks
first_chunk_var.value = nil if is_first

This is correct but happens inside the async task. If stream.each_chunk raises an exception immediately (e.g., connection error), is_first remains true but line 322 never executes.

Solution: Same as error handling above - wrap in rescue to guarantee first_chunk_var is set.

4. Missing Documentation

The code lacks inline comments explaining:

  • Why first_chunk_var is needed (synchronous first chunk delivery requirement)
  • Why remaining chunks go to @main_output_queue (for concurrent streaming)
  • The invariant that @async_barrier and @main_output_queue must be initialized before helper methods

Suggestion:

def run_stream_inside_fiber
  require "async/variable"

  if @async_barrier.nil?
    raise ReactOnRails::Error,
          "You must call stream_view_containing_react_components to render the view containing the react component"
  end

  # We need to return the first chunk synchronously (for Rails to render the initial HTML)
  # while streaming remaining chunks asynchronously to the response stream.
  # Async::Variable provides blocking wait semantics for this first chunk.
  first_chunk_var = Async::Variable.new

  # Start an async task to stream all chunks
  @async_barrier.async do
    # ... rest of implementation
  end
  
  # Block until first chunk is available, then return it synchronously
  first_chunk_var.wait
  first_chunk_var.value
end

📝 Minor Issues

5. Inconsistent Require Statements

Observation:

  • stream.rb:34-36 - requires at method start
  • helper.rb:293 - requires inside method

Recommendation: Move all require statements to the top of the file or module for consistency.

6. Variable Naming

is_first is clear, but consider first_chunk_sent or first_chunk_delivered to be more explicit about what state is being tracked.

🧪 Testing Recommendations

MANDATORY (as per CLAUDE.md):

From the project guidelines:

NEVER claim a test is "fixed" without running it locally first

You MUST run these tests locally before marking as complete:

# Test the specific streaming functionality
bundle exec rspec react_on_rails_pro/spec/react_on_rails_pro/stream_spec.rb
bundle exec rspec react_on_rails_pro/spec/dummy/spec/helpers/react_on_rails_pro_helper_spec.rb

# Run full Pro test suite
cd react_on_rails_pro && bundle exec rspec

The PR description shows:

  • Run existing streaming integration tests
  • Verify concurrent streaming still works
  • Test with multiple components
  • Test error handling

None of these are checked. Per CLAUDE.md, you must complete local testing before claiming fixes.

Additional Test Cases Needed:

  1. Error propagation test: Verify that if stream.each_chunk raises an exception, it propagates correctly and doesn't hang
  2. Empty stream test: Verify behavior when each_chunk yields zero chunks
  3. Single chunk test: Verify behavior when stream has exactly one chunk
  4. Concurrent ordering: Verify that multiple components' chunks are interleaved correctly
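
For the first of these, a minimal spec (a sketch only; assumes rspec plus the async gem) could exercise error propagation through the barrier:

it "propagates exceptions raised inside barrier tasks" do
  Sync do
    barrier = Async::Barrier.new
    barrier.async { raise "boom" }
    expect { barrier.wait }.to raise_error(RuntimeError, "boom")
  end
end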

🔒 Security Considerations

No security issues identified. The changes maintain the same execution model with different primitives.

📊 Performance Considerations

Neutral to positive:

  • Async::Variable.wait is efficient (reactor-based blocking)
  • LimitedQueue maintains backpressure characteristics
  • Barrier pattern may have slightly lower overhead than manual task coordination

Benchmark recommendation: If possible, compare streaming latency for:

  • Time to first chunk
  • Total streaming time for N components
  • Memory usage during streaming

🎯 Action Items

Before Merging:

  1. Fix test compatibility (Critical - tests will fail)

    • Update stream_spec.rb StreamController mock
    • Update all helper spec setups to use new primitives
  2. ⚠️ Add error handling to run_stream_inside_fiber (High priority)

    • Ensure first_chunk_var is always set, even on error
  3. 📝 Add documentation comments (Medium priority)

    • Explain first chunk synchronization requirement
    • Document initialization invariants
  4. Run all tests locally (Mandatory per CLAUDE.md)

    bundle exec rspec react_on_rails_pro/spec/react_on_rails_pro/stream_spec.rb
    bundle exec rspec react_on_rails_pro/spec/dummy/spec/helpers/react_on_rails_pro_helper_spec.rb
    cd react_on_rails_pro && bundle exec rake
  5. 📋 Update PR checklist - Check off completed items

Nice to Have:

  1. Move require statements to module/class top
  2. Consider more descriptive variable names (first_chunk_sent vs is_first)
  3. Add performance benchmarks comparing old vs new approach



Verdict

Conditionally Approve pending:

  1. Test compatibility fixes (blocking)
  2. Error handling improvements (blocking)
  3. Local test verification (mandatory per CLAUDE.md)

The architectural approach is sound and well-reasoned. The code quality is good once test compatibility is addressed. Excellent PR description and rationale.

Estimated time to address: 1-2 hours for fixes + testing

Great work on the refactoring! The async primitives approach is definitely the right direction. 🚀

@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 1

📜 Review details

Configuration used: CodeRabbit UI

Review profile: CHILL

Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between 86520a9 and c4d570f.

📒 Files selected for processing (2)
  • react_on_rails_pro/app/helpers/react_on_rails_pro_helper.rb (1 hunks)
  • react_on_rails_pro/lib/react_on_rails_pro/concerns/stream.rb (1 hunks)
🔇 Additional comments (2)
react_on_rails_pro/lib/react_on_rails_pro/concerns/stream.rb (2)

38-49: LGTM! Clean initialization and early template rendering.

The refactor correctly initializes the async primitives before template rendering and immediately writes the template to the response stream. This ensures components can start streaming under the barrier as soon as they're encountered during rendering.


69-71: The review comment is incorrect; the code correctly handles queue closure.

When an Async::LimitedQueue is closed, dequeue returns nil once the queue is closed and empty, which is exactly what the code expects. The while loop condition while (chunk = @main_output_queue.dequeue) exits gracefully when dequeue returns nil, a falsy value. This pattern is also validated in the test files (stream_spec.rb lines 21-22), which use the same dequeue then break-if-nil pattern, confirming the expected behavior.

Likely an incorrect or invalid review comment.

AbanoubGhadban and others added 2 commits November 23, 2025 19:45
Previously, chunks were written to the response stream only after all
component streaming tasks completed. Now the writer task starts
immediately and writes chunks concurrently as they arrive in the queue.

This improves streaming performance by eliminating the delay between
chunk production and delivery to the client.

Changes:
- Start writer task before waiting for barrier
- Writer dequeues and writes chunks concurrently with producers
- Added debug logging for development troubleshooting
- Disabled prerender_caching in dummy config for testing

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude <[email protected]>
Replace Fiber-based caching with async gem primitives for both cache
hit replay and cache miss buffering.

Changes:

1. **Cache Hit Replay** (handle_stream_cache_hit):
   - Replace Fiber.new with @async_barrier.async task
   - Use Async::Variable for first chunk synchronization
   - Enqueue cached chunks to main output queue

2. **Cache Miss Buffering** (handle_stream_cache_miss):
   - Set @rorp_cache_buffering flag before streaming
   - Normal streaming buffers chunks automatically
   - Simplified from nested fiber wrapping to simple flag

3. **run_stream_inside_fiber**:
   - Check for @rorp_cache_buffering flag
   - Buffer chunks if caching enabled
   - Write to Rails.cache after streaming completes

Benefits:
- ✅ Caching works with async primitives
- ✅ No manual Fiber management
- ✅ Simpler cache miss logic (no fiber wrapping)
- ✅ Cache hit replay uses same pattern as normal streaming

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude <[email protected]>

claude bot commented Nov 23, 2025

Code Review: Replace Manual Fibers with Async Gem Primitives

Overall Assessment

This is a well-architected refactor that successfully achieves its goal of enabling async gem code execution within component streaming. The approach is solid, the code is cleaner, and the architectural shift from manual Fiber management to async primitives is the right solution for the stated problem.


✅ Strengths

1. Clear Problem Statement & Solution

  • The PR description excellently explains the "Running scheduler on non-blocking fiber" error
  • The before/after architecture diagrams make the change easy to understand
  • The rationale for choosing this approach over alternatives is well-documented

2. Code Simplification

  • -40 lines net reduction (60 deleted, 20 added)
  • Eliminated build_producer_tasks and build_writer_task methods
  • drain_streams_concurrently reduced from 45 lines to 11 lines
  • More declarative code using async primitives vs. manual orchestration

3. Architectural Improvements

  • Unified concurrency model: All fiber management now handled by async gem
  • Clearer intent: Async::Barrier and Async::Variable are self-documenting
  • Better separation of concerns: Component streaming vs. output writing are clearly separated

4. Preserved Functionality

  • ✅ Concurrent component streaming maintained
  • ✅ Backpressure via LimitedQueue preserved
  • ✅ First chunk synchronous delivery preserved (critical for initial render)
  • ✅ Error propagation preserved (via barrier.wait)

⚠️ Potential Issues & Concerns

1. CRITICAL: Test Suite Needs Updating

The test file stream_spec.rb still uses the old manual Fiber approach:

# stream_spec.rb line 18-27
def render_to_string(**_opts)
  @rorp_rendering_fibers = @component_queues.map do |queue|
    Fiber.new do
      loop do
        chunk = queue.dequeue
        break if chunk.nil?
        Fiber.yield chunk
      end
    end
  end
  @initial_response
end

This mock needs to be updated to use the new async primitives:

  • Replace @rorp_rendering_fibers with @async_barrier and @main_output_queue
  • Update StreamController to match the new helper behavior

Action Required: Update the test mock to align with the new implementation, or the tests won't be testing the actual code paths.


2. Potential Race Condition: Empty Stream Handling

In react_on_rails_pro_helper.rb:321-322:

# Handle case where stream has no chunks
first_chunk_var.value = nil if is_first

Concern: This sets the variable AFTER the async task completes. If stream.each_chunk never yields (empty stream), there's a window where first_chunk_var.wait (line 325) could block indefinitely if the async task hasn't completed yet.

Potential fix: Set a default value before starting the async task:

first_chunk_var = Async::Variable.new
first_chunk_var.value = nil  # Default to nil BEFORE async task starts

@async_barrier.async do
  stream = yield
  is_first = true
  stream.each_chunk do |chunk|
    if is_first
      first_chunk_var.value = chunk  # Override default with first chunk
      is_first = false
    else
      @main_output_queue.enqueue(chunk)
    end
  end
  # No need to set nil here, already set above
end

OR verify that Async::Variable.new already handles this case correctly (may need to check async gem documentation).
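
A quick check of that question (a sketch, assuming resolving a Variable with nil behaves like any other value):

require "async"
require "async/variable"

Sync do
  var = Async::Variable.new
  Async { var.value = nil }   # resolving with nil still unblocks waiters
  var.wait                    # returns once resolved
  puts var.value.inspect      # => nil
end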


3. Documentation: Sync Block Placement is Critical

The PR description mentions:

Wrap entire method in Sync do block

Why this matters: The Sync do block establishes the async reactor that all @async_barrier.async calls depend on. If someone refactors this code later and moves the Sync do block, everything breaks.

Suggestion: Add a code comment explaining this dependency:

# CRITICAL: Sync block must wrap ENTIRE method including render_to_string
# Components start async tasks during view rendering via @async_barrier.async
# These tasks REQUIRE the Sync reactor to be active
Sync do |parent_task|
  @async_barrier = Async::Barrier.new
  # ...

4. Performance: First Chunk Blocking Behavior

# Wait for and return the first chunk (blocking)
first_chunk_var.wait
first_chunk_var.value

Question: Is this blocking wait acceptable in all scenarios?

  • In the old implementation, fiber.resume was also blocking for the first chunk
  • So behavior is preserved, not degraded
  • But worth documenting WHY the first chunk must be synchronous (likely for SSR hydration)

Suggestion: Add comment explaining the blocking behavior:

# Wait for first chunk (blocking) - required for synchronous SSR hydration
# Remaining chunks stream asynchronously via @main_output_queue
first_chunk_var.wait

5. Error Handling: What Happens on Exception in each_chunk?

If stream.each_chunk raises an exception:

@async_barrier.async do
  stream = yield
  stream.each_chunk do |chunk|  # <-- Exception here?
    # ...
  end
end

Expected behavior:

  • @async_barrier.wait in drain_streams_concurrently should propagate the exception
  • This matches the old behavior where exceptions would bubble up

Verification needed: Ensure existing error handling tests cover this case. The old tests may have verified this with manual Fibers, but may not cover async task failures.


6. Queue Cleanup: What if a Component Hangs?

If a component's each_chunk never completes:

@async_barrier.wait  # <-- Could block forever

Mitigation: The async gem likely has timeout mechanisms, but this should be verified.

Suggestion: Consider adding a timeout to the barrier wait if long-running components could be an issue:

# May need something like:
Async::Task.current.with_timeout(30) do
  @async_barrier.wait
end

OR document that this is expected behavior (components MUST complete or fail).


📋 Testing Recommendations

Must Test Before Merge:

  1. Update StreamController mock in stream_spec.rb to use new async primitives
  2. Run all existing streaming tests to verify preserved functionality
  3. Test empty stream scenario - verify first_chunk_var.value = nil works correctly
  4. Test error handling - verify exceptions in each_chunk propagate correctly
  5. Test concurrent streaming - verify multiple components still stream in parallel
  6. Test backpressure - verify LimitedQueue still applies backpressure

Should Add Tests:

  1. Test async code execution inside each_chunk (the primary goal of this PR)
  2. Test barrier timeout behavior if a component hangs
  3. Test queue cleanup on error scenarios

🔒 Security Considerations

No security concerns identified. The changes are internal refactoring of concurrency primitives and don't expose new attack surfaces.


⚡ Performance Considerations

Potential Improvements:

  • Memory: Using Async::Variable for first chunk may be more efficient than Fiber stack overhead
  • CPU: Barrier-based coordination likely has lower overhead than manual fiber resumption

Potential Concerns:

  • Blocking behavior: first_chunk_var.wait is blocking, but this matches old behavior
  • Queue size: Still controlled by concurrent_component_streaming_buffer_size config

Overall: Performance should be equivalent or better than manual Fiber approach.


📚 Code Quality & Best Practices

Excellent:

  • ✅ Clear variable names (first_chunk_var, @async_barrier, @main_output_queue)
  • ✅ Self-documenting code with async primitives
  • ✅ Proper cleanup with ensure blocks
  • ✅ Follows async gem best practices

Could Improve:

  • ⚠️ Add comments explaining WHY Sync do must wrap render (critical for maintainability)
  • ⚠️ Add comments explaining first chunk blocking behavior
  • ⚠️ Consider adding timeout handling for stuck components

🎯 Final Recommendations

Before Merge:

  1. MUST: Update StreamController mock in stream_spec.rb to match new implementation
  2. MUST: Run full test suite and verify all streaming tests pass
  3. MUST: Test empty stream scenario manually or add explicit test
  4. SHOULD: Add code comments explaining critical Sync do block placement
  5. SHOULD: Add code comments explaining first chunk blocking behavior
  6. SHOULD: Verify error propagation with updated tests

Nice to Have:

  • Add explicit test for async code execution inside each_chunk (demonstrates the fix)
  • Document timeout behavior for stuck components
  • Add migration guide if this affects Pro package users

Summary

This is a high-quality refactor that achieves its stated goal with cleaner, more maintainable code. The architectural shift from manual Fibers to async primitives is the correct approach.

Main blockers:

  1. Test suite needs updating to match new implementation
  2. Empty stream race condition needs verification/fix
  3. Critical code sections need documentation comments

Once these are addressed, this PR is ready to merge. Great work on simplifying complex concurrent code! 🚀

The StreamController test class was still using the old Fiber-based
approach. Updated it to use the new async primitives:

- Replace @rorp_rendering_fibers with @async_barrier tasks
- Enqueue all chunks to @main_output_queue
- Add require for Async::Variable
- Simplify test controller to just create barrier tasks

All streaming tests now pass (20 examples, 0 failures).

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude <[email protected]>

claude bot commented Nov 23, 2025

Code Review: Replace manual Fibers with Async gem primitives

I've completed a thorough review of this PR. Overall, this is a well-architected refactoring that achieves its primary goal of enabling async gem code within component streaming. Here's my detailed feedback:


Strengths

1. Clear Architecture & Motivation

  • The PR description excellently explains the "why" (enabling async props) and the "how" (replacing manual Fibers with async primitives)
  • The before/after comparison makes the changes easy to understand
  • The trade-off analysis shows thoughtful consideration of alternatives

2. Code Simplification

  • -40 net lines of code with improved clarity
  • drain_streams_concurrently reduced from 45 → 11 lines
  • Eliminated complex helper methods (build_producer_tasks, build_writer_task)
  • More idiomatic use of async gem patterns

3. Unified Concurrency Model

  • All fibers now managed by async gem (via Async::Barrier.async)
  • Consistent use of async primitives throughout
  • Easier to reason about concurrent behavior

🔍 Issues & Concerns

1. ⚠️ CRITICAL: Test Suite Needs Updating

Location: react_on_rails_pro/spec/react_on_rails_pro/stream_spec.rb:18

The test suite still uses the old manual Fiber approach:

@rorp_rendering_fibers = @component_queues.map do |queue|
  Fiber.new do
    loop do
      chunk = queue.dequeue
      break if chunk.nil?
      Fiber.yield chunk
    end
  end
end

This is problematic because:

  • Tests are validating the old implementation, not the new async-based one
  • The test suite may pass while missing bugs in the new async primitives
  • Tests won't catch issues with @async_barrier, @main_output_queue, or Async::Variable

Required Actions:

  1. Update StreamController in stream_spec.rb to use Async::Barrier and Async::LimitedQueue
  2. Replace manual Fiber.new with barrier.async blocks
  3. Update render_to_string to initialize @async_barrier and @main_output_queue
  4. Use Async::Variable for first chunk synchronization in tests
  5. Verify all streaming tests still pass with the new implementation

Example of what needs changing:

# OLD (current test code)
@rorp_rendering_fibers = @component_queues.map { |q| Fiber.new { ... } }

# NEW (should match actual implementation)
@async_barrier = Async::Barrier.new
@main_output_queue = Async::LimitedQueue.new(buffer_size)
@component_queues.each do |queue|
  @async_barrier.async { queue.each { |chunk| @main_output_queue.enqueue(chunk) } }
end

2. 🐛 Potential Race Condition in Cache Hit Replay

Location: react_on_rails_pro/app/helpers/react_on_rails_pro_helper.rb:254-262

@async_barrier.async do
  # Set first chunk (already have it from cache)
  first_chunk_var.value = initial_result  # ← Async task
  # Enqueue remaining cached chunks
  rest_chunks.each { |chunk| @main_output_queue.enqueue(chunk) }
end

# Wait for and return first chunk
first_chunk_var.wait  # ← Main fiber
first_chunk_var.value

Issue: There's a potential race where first_chunk_var.wait is called before the async task sets first_chunk_var.value.

Why this might work: Async::Variable.wait likely blocks until a value is set, so this may be intentional. However:

  • The pattern is inconsistent with handle_stream_cache_miss which doesn't use Variable at all
  • The comment "already have it from cache" suggests we don't need async for the first chunk
  • This adds unnecessary async overhead for a synchronous operation

Recommendation:

# Option 1: Return first chunk synchronously (simpler, more efficient)
@async_barrier.async do
  rest_chunks.each { |chunk| @main_output_queue.enqueue(chunk) }
end
initial_result  # Just return it directly

# Option 2: If you need Variable for consistency, document the blocking behavior
# and ensure `Async::Variable.wait` has timeout protection

3. 📝 Missing Documentation: Empty Stream Handling

Location: react_on_rails_pro/app/helpers/react_on_rails_pro_helper.rb:331

# Handle case where stream has no chunks
first_chunk_var.value = nil if is_first

Questions:

  • What happens when a component has no chunks? Is this a valid scenario?
  • How does the rest of the system handle a nil first chunk?
  • Should this be an error condition instead?

Recommendation: Add a comment explaining when this occurs and how it's handled downstream.


4. 🔒 Thread Safety: Instance Variable Access

Location: Multiple locations

cache_config = @rorp_cache_buffering  # ← Read from main fiber
@async_barrier.async do
  cache_config[:chunks] << chunk if cache_config  # ← Write from async task
end

Potential Issue: @rorp_cache_buffering is set in the main fiber but accessed/modified from async tasks. While Ruby's GIL provides some protection, this pattern can be fragile.

Recommendation:

  • Document that @rorp_cache_buffering is set before async tasks start and cleared after they finish
  • Consider using Async::Variable or other async-safe primitives if this pattern evolves
  • Add a comment explaining the threading assumptions

5. ⚠️ Error Handling: Barrier Wait Exceptions

Location: react_on_rails_pro/lib/react_on_rails_pro/concerns/stream.rb:69

# Wait for all component streaming tasks to complete
@async_barrier.wait

Question: What happens if a component streaming task raises an exception?

  • Does barrier.wait propagate the exception?
  • Will queue.close still be called in the ensure block?
  • Is there a risk of the response stream not being closed?

From the code, I see an ensure block wrapping this, which is good! But:

Recommendation: Add a comment documenting the error propagation behavior and ensure block protection.
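
A self-contained sketch of the behavior in question (structure assumed from the PR description; FakeResponseStream is a hypothetical stand-in for response.stream):

require "async"
require "async/barrier"
require "async/limited_queue"

class FakeResponseStream
  def write(chunk) = puts("wrote: #{chunk}")
  def close = puts("stream closed")
end

Sync do |task|
  barrier = Async::Barrier.new
  queue = Async::LimitedQueue.new(4)
  out = FakeResponseStream.new

  writer = task.async do
    while (chunk = queue.dequeue)
      out.write(chunk)
    end
  end

  barrier.async do
    queue.enqueue("hello")
    raise "component failed"
  end

  begin
    barrier.wait               # re-raises the component task's exception
  rescue StandardError => e
    puts "propagated: #{e.message}"
  ensure
    queue.close                # unblocks the writer loop
    writer.wait
    out.close                  # the stream is always closed
  end
end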


6. 🎯 Performance: Writer Task Timing

Location: Commit message claims writer starts "immediately", but code shows:

writing_task = parent_task.async do
  while (chunk = @main_output_queue.dequeue)
    response.stream.write(chunk)
  end
end

# Wait for all component streaming tasks to complete
@async_barrier.wait

Observation: This is correct - writer task starts immediately and drains concurrently. However:

  • The writer will block on dequeue until producers enqueue chunks
  • @async_barrier.wait ensures all producers finish
  • queue.close signals the writer to exit

This is good architecture! But ensure integration tests verify concurrent writing behavior (not just sequential).


🧪 Testing Concerns

Missing Test Coverage:

  1. Caching with async primitives

    • Does handle_stream_cache_hit work with the new Variable approach?
    • Does @rorp_cache_buffering correctly capture chunks from async tasks?
  2. Error scenarios

    • Component streaming task raises exception
    • Empty component streams (the first_chunk_var.value = nil case)
    • Queue full scenarios (backpressure)
  3. Concurrent streaming performance

    • Does the writer actually write chunks while producers are still producing?
    • The existing test in stream_spec.rb checks this, but needs updating to match new implementation

Per CLAUDE.md guidelines:

CRITICAL - LOCAL TESTING REQUIREMENTS:

  1. NEVER claim a test is "fixed" without running it locally first

The PR description has an empty testing checklist:

- [ ] Run existing streaming integration tests
- [ ] Verify concurrent streaming still works
- [ ] Test with multiple components
- [ ] Test error handling

Please complete this checklist before merging:

# Run Pro-specific tests
cd react_on_rails_pro
bundle exec rspec spec/react_on_rails_pro/stream_spec.rb
bundle exec rspec spec/react_on_rails_pro/cache_spec.rb

# Run integration tests (if available)
# Check for Playwright/E2E tests covering streaming
yarn test:e2e

📋 Code Quality & Best Practices

✅ Good:

  • Clear variable naming (first_chunk_var, @main_output_queue, @async_barrier)
  • Consistent use of async primitives
  • Proper cleanup (queue.close, stream close in ensure block)
  • Configuration validation for buffer size

🔧 Suggestions:

  1. Add RBS type signatures for new methods (per CLAUDE.md RBS guidelines)
  2. Rubocop: Ensure bundle exec rubocop passes (CLAUDE.md requirement)
  3. Comments: Add inline comments explaining:
    • Why Async::Variable is used for first chunk
    • Thread safety assumptions for @rorp_cache_buffering
    • Error propagation behavior from barrier tasks

🔐 Security Considerations

No major security concerns, but:

  1. DoS via queue exhaustion: The buffer_size config (default 64) prevents unbounded memory growth. ✅
  2. Exception leakage: Ensure exceptions from component rendering don't leak sensitive info to clients
  3. Cache poisoning: Caching chunks to Rails.cache - ensure cache keys are properly namespaced (looks good)

📚 Documentation & CHANGELOG

Per CLAUDE.md:

Pro-only features/fixes → Update /CHANGELOG_PRO.md

Action Required: Add an entry to CHANGELOG_PRO.md:

#### [Unreleased]

##### Improved
* [PR 2111](https://github.com/shakacode/react_on_rails/pull/2111) by [AbanoubGhadban](https://github.com/AbanoubGhadban): Refactored component streaming to use Async gem primitives instead of manual Fiber management, enabling async props support and simplifying the concurrency model.

🚀 Deployment Considerations

  1. Breaking changes? No - the public API remains unchanged ✅
  2. Performance impact? Should be neutral or better (less manual fiber management overhead)
  3. Rollback plan? Straightforward - revert to previous commit
  4. Async gem dependency? Already a dependency - no new requirements ✅

📊 Final Verdict

Overall Assessment: Approve with Required Changes ⚠️

This is a solid refactoring that achieves its stated goals, but has one critical blocker:

MUST FIX Before Merge:

  1. Update test suite in stream_spec.rb to use async primitives instead of manual Fibers

SHOULD FIX Before Merge:

  1. ⚠️ Review cache hit replay logic (race condition/unnecessary async)
  2. ⚠️ Add CHANGELOG_PRO.md entry
  3. ⚠️ Complete testing checklist

NICE TO HAVE:

  1. 💡 Add inline comments for thread safety and error handling assumptions
  2. 💡 Add RBS type signatures for new methods
  3. 💡 Document empty stream handling behavior

Action Items Summary

  1. Update stream_spec.rb - Replace manual Fibers with Async::Barrier/Async::Variable (CRITICAL)
  2. Run full test suite locally - Complete testing checklist in PR description
  3. Add CHANGELOG_PRO.md entry - Document this improvement
  4. Review cache hit replay - Consider simplifying first chunk handling
  5. Run bundle exec rubocop - Ensure no violations (per CLAUDE.md)
  6. Add inline documentation - Thread safety, error handling, empty stream cases

Once these are addressed (especially #1), this will be ready to merge! 🎉

Great work on simplifying the concurrency model and enabling async props support! 👏


claude bot commented Nov 23, 2025

PR Review: Replace manual Fibers with Async gem primitives

I've reviewed this PR and it represents a well-thought-out architectural improvement. Here's my detailed feedback:


Strengths

1. Clear Problem Statement & Solution

The PR description excellently explains:

  • Why manual Fibers don't work (can't run async code inside)
  • What the solution is (use async gem primitives)
  • How the architecture changed (with clear before/after diagrams)

2. Simplified Code Structure

  • -40 lines net reduction demonstrates this is a genuine simplification
  • drain_streams_concurrently reduced from 45 lines to 11 lines (76% reduction)
  • Eliminated build_producer_tasks and build_writer_task methods - cleaner abstraction

3. Unified Concurrency Model

Replacing manual Fiber management with async gem primitives (Async::Barrier, Async::Variable, Async::LimitedQueue) creates a single, consistent approach to concurrency.

4. Test Coverage Updated

The stream_spec.rb changes correctly reflect the new architecture while maintaining the same test scenarios.


⚠️ Concerns & Suggestions

1. Instance Variable Lifecycle (Critical)

Issue: @async_barrier and @main_output_queue are instance variables on the controller/helper, created inside Sync do block:

# stream.rb:38-42
Sync do |parent_task|
  @async_barrier = Async::Barrier.new
  @main_output_queue = Async::LimitedQueue.new(buffer_size)
  # ...
end

Then accessed in helper methods:

# helper.rb:254, 302-304
@async_barrier.async do
  # ...
end

if @async_barrier.nil?
  raise ReactOnRails::Error, "You must call stream_view_containing_react_components..."
end

Concerns:

  1. Memory leaks: These instance variables persist after the Sync block completes. They hold references to async primitives that should be garbage collected.
  2. State pollution: If the same controller instance handles multiple requests (e.g., in threading scenarios), these variables could leak state between requests.
  3. Nil checks are brittle: The helper relies on @async_barrier.nil? to detect misuse, but this check won't catch stale references from previous requests.

Recommendation:
Consider passing these as parameters instead of instance variables:

def stream_view_containing_react_components(template:, close_stream_at_end: true, **render_options)
  Sync do |parent_task|
    barrier = Async::Barrier.new
    output_queue = Async::LimitedQueue.new(buffer_size)
    
    streaming_context = { barrier: barrier, queue: output_queue }
    template_string = render_to_string(template: template, locals: { streaming_context: streaming_context }, **render_options)
    # ...
  end
end

Or use ensure to clean up:

Sync do |parent_task|
  @async_barrier = Async::Barrier.new
  @main_output_queue = Async::LimitedQueue.new(buffer_size)
  # ...
ensure
  @async_barrier = nil
  @main_output_queue = nil
end

2. Error Handling Gap

Missing: What happens if an async task in the barrier raises an exception?

# helper.rb:314-339
@async_barrier.async do
  stream = yield  # Could raise
  stream.each_chunk do |chunk|  # Could raise
    # ...
  end
end

Current behavior: The barrier will propagate the error when barrier.wait is called (line 70 in stream.rb), but:

  1. The first_chunk_var.wait in helper.rb:342 might hang indefinitely if the task fails before setting the first chunk
  2. No cleanup happens for the output queue

Recommendation:
Add error handling to ensure first_chunk_var is always set (even to nil/error):

@async_barrier.async do
  begin
    stream = yield
    is_first = true
    
    stream.each_chunk do |chunk|
      # ... existing logic
    end
    
    first_chunk_var.value = nil if is_first
  rescue => e
    # Ensure first_chunk_var is set even on error
    first_chunk_var.value = nil if is_first
    raise  # Re-raise to propagate to barrier.wait
  end
end

3. Potential Race Condition in Cache Hit Path

In handle_stream_cache_hit (helper.rb:254-264):

@async_barrier.async do
  first_chunk_var.value = initial_result  # Line 256
  rest_chunks.each { |chunk| @main_output_queue.enqueue(chunk) }  # Line 259
end

first_chunk_var.wait  # Line 263
first_chunk_var.value  # Line 264

Issue: There's a race condition window:

  1. The async task sets first_chunk_var.value (line 256)
  2. The async task enqueues remaining chunks (line 259)
  3. The main thread waits (line 263) and reads (line 264)

If the async task is very slow to start, the wait on line 263 might complete before line 259 executes, potentially causing chunks to be enqueued after the drain starts or in unexpected order.

Impact: Low likelihood in practice (async task should start quickly), but worth documenting or adding a comment.

4. Test Spec Doesn't Match Real Implementation

The test's render_to_string (stream_spec.rb:18-34) differs from the real implementation:

Test version:

def render_to_string(**_opts)
  @component_queues.each do |queue|
    @async_barrier.async do
      loop do
        chunk = queue.dequeue
        @main_output_queue.enqueue(chunk)  # ALL chunks to queue
      end
    end
  end
  @initial_response
end

Real implementation (helper.rb:318-330):

stream.each_chunk do |chunk|
  if is_first
    first_chunk_var.value = chunk  # First chunk to variable
    is_first = false
  else
    @main_output_queue.enqueue(chunk)  # Rest to queue
  end
end

Issue: The test doesn't validate the critical behavior of first chunk synchronization via Async::Variable. This is the key mechanism that maintains "first chunk synchronous delivery" mentioned in the PR description.

Recommendation: Update the test to simulate the real first-chunk-variable pattern, or add a specific test case that validates first chunk handling.
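
A possible shape for such a test (a sketch; assumes rspec plus the async gem, with the harness details hypothetical):

it "returns the first chunk synchronously and queues the rest" do
  Sync do
    barrier = Async::Barrier.new
    queue = Async::LimitedQueue.new(10)
    first = Async::Variable.new

    barrier.async do
      %w[head body tail].each_with_index do |chunk, i|
        if i.zero?
          first.value = chunk
        else
          queue.enqueue(chunk)
        end
      end
    end

    first.wait
    expect(first.value).to eq("head")
    barrier.wait
    queue.close
    expect([queue.dequeue, queue.dequeue, queue.dequeue]).to eq(["body", "tail", nil])
  end
end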

5. Documentation: Caching with Async Architecture

The caching logic in handle_stream_cache_miss (helper.rb:267-278) uses @rorp_cache_buffering flag:

@rorp_cache_buffering = { key: view_cache_key, chunks: [], options: raw_options[:cache_options] || {} }
initial_result = render_stream_component_with_props(...)
@rorp_cache_buffering = nil

Then in run_stream_inside_fiber (helper.rb:318-320):

cache_config = @rorp_cache_buffering
# ... later
cache_config[:chunks] << chunk if cache_config

Issue: The closure captures cache_config reference, but @rorp_cache_buffering is set to nil (line 275) before the async task completes. This works because Ruby captures the reference, but it's subtle.

Recommendation: Add a comment explaining this intentional closure capture:

# Capture cache config reference before disabling caching
# The async task will continue using this reference even after we nil the instance var
cache_config = @rorp_cache_buffering

🔍 Additional Observations

Positive:

  1. Backpressure preserved: LimitedQueue maintains the buffer size limit
  2. Ordering preserved: Barrier ensures all component tasks complete before draining
  3. Proper cleanup: queue.close signals end-of-stream correctly

Questions:

  1. Performance: Has this been benchmarked against the old implementation? The PR mentions enabling async props as the primary goal, but any performance change would be good to document.
  2. Async gem version: Are there minimum version requirements for Async::Barrier and Async::Variable?

📋 Recommendations Summary

  • 🔴 Critical: Instance variable lifecycle. Action: add ensure cleanup or refactor to parameters.
  • 🟡 Medium: Error handling in async tasks. Action: ensure first_chunk_var is set even on error.
  • 🟡 Medium: Test doesn't match real implementation. Action: add a test for first-chunk-variable behavior.
  • 🟢 Low: Race condition in cache hit. Action: document or add a comment about ordering.
  • 🟢 Low: Caching closure capture. Action: add a clarifying comment.

🎯 Verdict

This is a well-designed refactoring that achieves its stated goal of enabling async code in streaming contexts. The code simplification is real and meaningful. However, the instance variable lifecycle issue should be addressed before merging to avoid potential memory leaks or state pollution in threaded environments.

Recommendation: Request changes to address the critical instance variable cleanup issue. The other concerns are enhancements that could be addressed in follow-up PRs if time is tight.

Great work on the clear documentation and systematic approach to this refactoring! 🚀


Reviewed with attention to code quality, thread safety, async patterns, and Rails conventions per CLAUDE.md guidelines.


@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 1

♻️ Duplicate comments (1)
react_on_rails_pro/app/helpers/react_on_rails_pro_helper.rb (1)

299-344: Prevent deadlock in run_stream_inside_fiber when streaming fails before first chunk

run_stream_inside_fiber uses Async::Variable to synchronize the first chunk and then blocks with first_chunk_var.wait. If the async task raises before assigning to first_chunk_var (e.g., yield fails, the returned stream raises in each_chunk, or there’s an early error before any chunk is produced), first_chunk_var.wait will never be satisfied, and the calling code can deadlock.

This is the same issue noted in the previous review; it’s still present here since there’s no rescue/ensure around the async block to guarantee that first_chunk_var is set.

A minimal fix is to ensure first_chunk_var is always assigned, even on error, and to keep the cache write consistent:

def run_stream_inside_fiber
  require "async/variable"

  if @async_barrier.nil?
    raise ReactOnRails::Error,
          "You must call stream_view_containing_react_components to render the view containing the react component"
  end

  first_chunk_var = Async::Variable.new
  cache_config = @rorp_cache_buffering

  @async_barrier.async do
    is_first = true
    begin
      stream = yield

      stream.each_chunk do |chunk|
        cache_config[:chunks] << chunk if cache_config

        if is_first
          first_chunk_var.value = chunk
          is_first = false
        else
          @main_output_queue.enqueue(chunk)
        end
      end

      # No chunks case:
      first_chunk_var.value = nil if is_first

      if cache_config
        Rails.cache.write(cache_config[:key], cache_config[:chunks], cache_config[:options])
      end
    rescue
      # Unblock caller even on error:
      first_chunk_var.value = nil if is_first
      # Optionally log or re-raise to propagate the failure:
      raise
    end
  end

  first_chunk_var.wait
  first_chunk_var.value
end

This keeps the happy-path behavior (first chunk returned synchronously, rest enqueued, cache populated) but guarantees the caller never waits forever if the underlying stream fails early.

You can verify the fix by forcing an exception inside the streaming block (e.g., raising from server_rendered_react_component before any chunks are produced) and confirming that the request fails fast rather than hanging.
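A stripped-down demonstration of the underlying hazard, outside the project code:

require "async"
require "async/variable"

Sync do |task|
  var = Async::Variable.new

  task.async do
    raise "fails before var.value is set"
  rescue => error
    var.value = nil  # without this line, the wait below never returns
    puts "task failed: #{error.message}"
  end

  var.wait  # unblocks only because the rescue path assigned a value
end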

🧹 Nitpick comments (3)
react_on_rails_pro/lib/react_on_rails_pro/concerns/stream.rb (1)

34-55: Barrier + LimitedQueue integration looks solid but newline comment is stale

Using Sync to initialize a per-request Async::Barrier and Async::LimitedQueue, then rendering the template and streaming the rest via drain_streams_concurrently, is a clean replacement for the previous fiber setup and should preserve concurrent streaming and backpressure.

The inline comment about stripping extra newlines no longer matches the implementation (template_string is written as-is). Either update the comment to describe the current behavior or implement the trimming logic to avoid future confusion.

react_on_rails_pro/app/helpers/react_on_rails_pro_helper.rb (2)

241-265: Cache hit replays look correct; consider guarding against missing async context

The cache-hit path correctly:

  • Recomputes render_options and calls load_pack_for_generated_component.
  • Splits cached_chunks into initial_result and rest_chunks.
  • Uses an async task to enqueue the remaining chunks onto @main_output_queue while returning initial_result synchronously via Async::Variable.

Two small robustness points:

  1. This method implicitly assumes @async_barrier and @main_output_queue are already set up by stream_view_containing_react_components. If someone accidentally calls cached_stream_react_component outside a streaming view, they’ll get a less obvious failure later. Mirroring run_stream_inside_fiber’s guard with a friendly error would make this safer:
raise ReactOnRails::Error, "You must call stream_view_containing_react_components ..." if @async_barrier.nil? || @main_output_queue.nil?
  2. If cached_chunks is unexpectedly empty, initial_result will be nil. That will still work (ERB treats it as empty string), but you may want an explicit check with a clearer error or log in that case so cache corruption doesn’t silently degrade streaming.

267-278: Cache miss buffering via @rorp_cache_buffering is sound but relies on convention

The cache-miss path enables buffering by setting @rorp_cache_buffering, then delegates to render_stream_component_with_props, which ultimately uses run_stream_inside_fiber. Because run_stream_inside_fiber snapshots cache_config = @rorp_cache_buffering into a local before starting the async task, resetting @rorp_cache_buffering to nil afterwards is safe even with multiple concurrent streaming components.

Given that this helper is intended only for use inside streaming views, it would still be worth:

  • Adding the same async-context guard as run_stream_inside_fiber (and as suggested for handle_stream_cache_hit) to fail fast if it’s called from a non-streamed template.
  • Optionally documenting that @rorp_cache_buffering is an internal flag and not part of the public surface, to discourage accidental reuse.

Functionally, the current implementation is correct.

📜 Review details

Configuration used: CodeRabbit UI

Review profile: CHILL

Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between c4d570f and effc7a8.

📒 Files selected for processing (3)
  • react_on_rails_pro/app/helpers/react_on_rails_pro_helper.rb (2 hunks)
  • react_on_rails_pro/lib/react_on_rails_pro/concerns/stream.rb (1 hunks)
  • react_on_rails_pro/spec/react_on_rails_pro/stream_spec.rb (2 hunks)
🧰 Additional context used
🧠 Learnings (2)
📚 Learning: 2025-09-15T21:24:48.207Z
Learnt from: AbanoubGhadban
Repo: shakacode/react_on_rails PR: 1781
File: node_package/src/ClientSideRenderer.ts:82-95
Timestamp: 2025-09-15T21:24:48.207Z
Learning: In React on Rails, the force_load feature includes both explicit `data-force-load="true"` usage and the ability to hydrate components during the page loading state (`document.readyState === 'loading'`). Both capabilities require a Pro license, so the condition `!railsContext.rorPro && (isComponentForceLoaded || document.readyState === 'loading')` correctly gates both scenarios.

Applied to files:

  • react_on_rails_pro/lib/react_on_rails_pro/concerns/stream.rb
  • react_on_rails_pro/app/helpers/react_on_rails_pro_helper.rb
📚 Learning: 2025-04-09T12:56:10.756Z
Learnt from: AbanoubGhadban
Repo: shakacode/react_on_rails PR: 1696
File: node_package/src/RSCPayloadContainer.ts:0-0
Timestamp: 2025-04-09T12:56:10.756Z
Learning: In the react_on_rails codebase, RSC payloads are already stringified using `JSON.stringify()` before being processed by the `escapeScript` function, which handles escaping of special characters. The function only needs to handle specific HTML markers like comments and closing script tags.

Applied to files:

  • react_on_rails_pro/spec/react_on_rails_pro/stream_spec.rb
🧬 Code graph analysis (3)
react_on_rails_pro/lib/react_on_rails_pro/concerns/stream.rb (4)
react_on_rails_pro/lib/react_on_rails_pro/configuration.rb (1)
  • configuration (9-38)
react_on_rails_pro/spec/react_on_rails_pro/stream_spec.rb (1)
  • render_to_string (18-34)
react_on_rails_pro/spec/dummy/spec/helpers/react_on_rails_pro_helper_spec.rb (2)
  • render_to_string (14-14)
  • response (15-15)
packages/react-on-rails-pro/tests/AsyncQueue.ts (1)
  • dequeue (27-54)
react_on_rails_pro/spec/react_on_rails_pro/stream_spec.rb (1)
packages/react-on-rails-pro/tests/AsyncQueue.ts (2)
  • dequeue (27-54)
  • enqueue (10-20)
react_on_rails_pro/app/helpers/react_on_rails_pro_helper.rb (2)
lib/react_on_rails/react_component/render_options.rb (1)
  • auto_load_bundle (74-76)
lib/react_on_rails/helper.rb (1)
  • load_pack_for_generated_component (333-349)
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (6)
  • GitHub Check: examples (3.4, latest)
  • GitHub Check: examples (3.2, minimum)
  • GitHub Check: pro-lint-js-and-ruby
  • GitHub Check: build-dummy-app-webpack-test-bundles
  • GitHub Check: build-dummy-app-webpack-test-bundles
  • GitHub Check: claude-review
🔇 Additional comments (1)
react_on_rails_pro/spec/react_on_rails_pro/stream_spec.rb (1)

17-33: StreamController test harness correctly mirrors barrier/queue streaming flow

The render_to_string test helper now drives per-component Async::Queues into @main_output_queue under @async_barrier, matching the production ReactOnRailsPro::Stream pattern. Closing the per-component queues in the specs ensures the dequeue loop terminates once they’re empty, so the async tasks won’t hang.

This looks consistent with Async’s documented behavior (closed-and-empty queues returning nil from dequeue) and with the new barrier-based drain model in the concern.

If you want to double-check the queue semantics against your installed async version, you can run something like:

require "async"
require "async/queue"

Async do |task|
  q = Async::Queue.new
  consumer = task.async { q.dequeue }
  q.close
  puts consumer.wait.inspect # Expect nil once closed and empty
end

Comment on lines +61 to 75
def drain_streams_concurrently(parent_task)
  writing_task = parent_task.async do
    # Drain all remaining chunks from the queue to the response stream
    while (chunk = @main_output_queue.dequeue)
      response.stream.write(chunk)
    end
  end

  # Wait for all component streaming tasks to complete
  @async_barrier.wait

  # Close the queue to signal end of streaming
  @main_output_queue.close
  writing_task.wait
end

⚠️ Potential issue | 🟠 Major

🧩 Analysis chain

Ensure queues and writer task are cleaned up even if a component task fails

drain_streams_concurrently assumes @async_barrier.wait completes normally before closing @main_output_queue and waiting on writing_task. If any async component task raises, @async_barrier.wait can raise as well, which means:

  • @main_output_queue.close won’t run, leaving the queue open.
  • writing_task may remain blocked in dequeue on an open, now-idle queue.

Depending on how Async::Barrier and task failure propagation behave in your exact async version, this could manifest as a hung streaming response or leaked task rather than a clean failure.

Consider wrapping the wait/close sequence so the queue is always closed and the writer task joined, even when @async_barrier.wait raises, e.g.:

def drain_streams_concurrently(parent_task)
  writing_task = parent_task.async do
    while (chunk = @main_output_queue.dequeue)
      response.stream.write(chunk)
    end
  end

  begin
    @async_barrier.wait
  ensure
    @main_output_queue.close
    writing_task.wait
  end
end

This keeps the steady-state behavior but guarantees termination of the writer loop in error paths.

To validate this, you can inject a failure into one component task (e.g., raise inside stream.each_chunk) and confirm the request doesn’t hang and all tasks complete.


I'll verify the review comment by examining the actual implementation and understanding the async behavior.

Good! I found critical information. According to the documentation, Async::Barrier#wait will invoke Async::Task#wait on each waiting task, which may raise an error. This confirms the review concern. Now let me search the codebase to verify the actual implementation.



🏁 Script executed:

#!/bin/bash
# Find the file and verify it exists
find . -name "stream.rb" -path "*/react_on_rails_pro/*" 2>/dev/null | head -5

Length of output: 62




🏁 Script executed:

#!/bin/bash
# View the exact method and surrounding context
sed -n '50,85p' ./react_on_rails_pro/lib/react_on_rails_pro/concerns/stream.rb

Length of output: 666


🏁 Script executed:

#!/bin/bash
# Search for error handling or rescue blocks in this file
grep -n -B2 -A2 "rescue\|ensure\|raise" ./react_on_rails_pro/lib/react_on_rails_pro/concerns/stream.rb

Length of output: 160


🏁 Script executed:

#!/bin/bash
# Search for tests related to drain_streams_concurrently or error handling
find . -name "*.rb" -path "*/test/*" -o -path "*/spec/*" | xargs grep -l "drain_streams_concurrently\|stream.*fail\|async.*barrier" 2>/dev/null

Length of output: 59




🏁 Script executed:

#!/bin/bash
# Examine the test file to see if error scenarios are covered
cat ./react_on_rails_pro/spec/react_on_rails_pro/stream_spec.rb

Length of output: 15492


🏁 Script executed:

#!/bin/bash
# Check if there are any other usages or dependencies on @async_barrier or @main_output_queue
grep -n "@async_barrier\|@main_output_queue" ./react_on_rails_pro/lib/react_on_rails_pro/concerns/stream.rb

Length of output: 232


Ensure queues and writer task are cleaned up even if a component task fails

The concern is valid. If @async_barrier.wait raises an exception (propagated from a failed component task), the subsequent lines closing @main_output_queue and joining writing_task won't execute. This leaves the queue open and the writer task blocked on an idle queue, causing potential resource leaks and hung streaming responses. The outer begin/ensure block (lines 51–55) only closes response.stream and doesn't address internal state cleanup.

The proposed fix is correct and necessary:

def drain_streams_concurrently(parent_task)
  writing_task = parent_task.async do
    while (chunk = @main_output_queue.dequeue)
      response.stream.write(chunk)
    end
  end

  begin
    @async_barrier.wait
  ensure
    @main_output_queue.close
    writing_task.wait
  end
end

This ensures termination of the writer loop and proper task cleanup even in error paths.

🤖 Prompt for AI Agents
In react_on_rails_pro/lib/react_on_rails_pro/concerns/stream.rb around lines 61
to 75, the current flow calls @async_barrier.wait and then closes
@main_output_queue and waits on writing_task, but if @async_barrier.wait raises
an exception the queue and writer are left open/blocked; wrap the
@async_barrier.wait call in a begin/ensure block so that in the ensure you
always call @main_output_queue.close and writing_task.wait to guarantee the
writer loop terminates and resources are cleaned up even when a component task
fails.

## Problem

Previously used instance variable @rorp_cache_buffering to communicate
between handle_stream_cache_miss and run_stream_inside_fiber. This was:
- Implicit and hard to follow
- Used unnecessary Async::Variable in handle_stream_cache_hit
- More complex than needed

## Solution

Introduce clean `on_complete` callback pattern:
- stream_react_component extracts on_complete from options
- run_stream_inside_fiber accepts on_complete parameter
- Callback receives all chunks when streaming completes
- Only collects chunks when callback provided (memory efficient)

## Changes

1. **stream_react_component**:
   - Extract on_complete callback from options
   - Pass to run_stream_inside_fiber

2. **run_stream_inside_fiber**:
   - Accept on_complete: nil parameter
   - Replace @rorp_cache_buffering with local all_chunks array
   - Call on_complete&.call(all_chunks) when streaming completes

3. **handle_stream_cache_miss**:
   - Use on_complete: ->(chunks) { Rails.cache.write(...) }
   - Much simpler than instance variable approach

4. **handle_stream_cache_hit**:
   - Remove unnecessary Async::Variable
   - Return initial_result directly (already have it)
   - Simpler async task to enqueue remaining chunks

## Benefits

✅ Clean callback API: on_complete: ->(chunks) { ... }
✅ No instance variables for communication
✅ Memory efficient (only collects when callback provided)
✅ Clear lifecycle (callback fires after all chunks received)
✅ Net -14 lines of code
✅ All tests passing (20 examples, 0 failures)

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude <[email protected]>
@claude

claude bot commented Nov 23, 2025

Code Review: Replace Manual Fibers with Async Gem Primitives

Thank you for this PR! This is a well-thought-out refactoring that successfully achieves its stated goal of enabling async gem code within streaming components. Here's my detailed review:


✅ Strengths

1. Clear Architecture Improvement

The migration from manual Fiber management to async gem primitives creates a unified concurrency model. This is a solid architectural decision that:

  • Enables future async props implementation
  • Simplifies the code (net -40 lines)
  • Uses appropriate async primitives (Async::Barrier, Async::Variable, Async::LimitedQueue)
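For readers less familiar with these primitives, a minimal standalone sketch of how the three compose (the capacity and strings are arbitrary):

require "async"
require "async/barrier"
require "async/variable"
require "async/queue"

Sync do
  barrier = Async::Barrier.new
  first   = Async::Variable.new
  queue   = Async::LimitedQueue.new(4)

  barrier.async do
    first.value = "first chunk"   # wakes anyone blocked in first.wait
    queue.enqueue("second chunk") # suspends if the queue is full
  end

  puts first.wait  # returns the assigned value
  barrier.wait     # waits for every task started on the barrier
  queue.close
  while (chunk = queue.dequeue)
    puts chunk
  end
end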

2. Well-Documented PR Description

The PR description excellently explains:

  • The problem ("Running scheduler on non-blocking fiber")
  • The root cause (manual Fibers vs async-managed Fibers)
  • The architectural change with before/after diagrams
  • The benefits and trade-offs

3. Maintains Existing Functionality

Good job preserving:

  • ✅ Concurrent component streaming
  • ✅ Backpressure via LimitedQueue
  • ✅ First chunk synchronous delivery
  • ✅ Error propagation (via ensure blocks)

⚠️ Issues & Concerns

1. CRITICAL: Potential Memory Leak in on_complete Callback

Location: react_on_rails_pro_helper.rb:311-335

def run_stream_inside_fiber(on_complete:)
  # ...
  all_chunks = [] if on_complete  # Only collect if callback provided
  
  @async_barrier.async do
    stream = yield
    is_first = true
    
    stream.each_chunk do |chunk|
      all_chunks << chunk if on_complete  # ⚠️ Accumulates ALL chunks in memory
      # ...
    end
    
    on_complete&.call(all_chunks) if on_complete
  end
  # ...
end

Problem: When caching is enabled, all_chunks accumulates the entire streamed response in memory for every component, even for very large responses. This defeats the purpose of streaming.

Impact:

  • Large components (e.g., RSC payloads, large HTML) will cause memory bloat
  • Multiple concurrent components will multiply this problem
  • No backpressure limit on all_chunks array growth

Suggested Fix:
Consider one of these approaches:

  1. Stream-to-cache: Write chunks to cache incrementally instead of buffering
  2. Size limit: Add a configurable max cache size and skip caching if exceeded
  3. Document: At minimum, document this limitation in the method comments

Example:

# Option 1: Stream to cache incrementally (pseudo-code)
cache_writer = CacheWriter.new(cache_key) if on_complete
stream.each_chunk do |chunk|
  cache_writer&.write(chunk)
  # ... rest of logic
end
cache_writer&.finalize if on_complete
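One possible shape for that hypothetical CacheWriter (illustrative only; Rails.cache has no incremental-write API, so this variant mainly enforces a size cap, i.e. option 2 above):

class CacheWriter
  def initialize(cache_key, max_bytes: 1_000_000)
    @cache_key = cache_key
    @max_bytes = max_bytes
    @chunks = []
    @bytes = 0
  end

  def write(chunk)
    return unless @chunks          # caching already abandoned
    @bytes += chunk.bytesize
    if @bytes > @max_bytes
      @chunks = nil                # over the cap: stop buffering, skip caching
    else
      @chunks << chunk
    end
  end

  def finalize
    Rails.cache.write(@cache_key, @chunks) if @chunks
  end
end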

2. Race Condition Risk: @async_barrier Initialization

Location: stream.rb:38-42

Sync do |parent_task|
  @async_barrier = Async::Barrier.new
  # ...
  template_string = render_to_string(template: template, **render_options)
  # ⚠️ Components call run_stream_inside_fiber during this render

Concern: While @async_barrier is initialized before render_to_string, the reliance on instance variables for thread-local state is fragile:

  1. If render_to_string somehow runs asynchronously or in a different context
  2. If components are rendered in a different thread/fiber
  3. If multiple concurrent requests share the same controller instance

Why it works now: Rails controllers are instantiated per-request, and render_to_string is synchronous.

Suggested improvements:

  1. Add a guard in run_stream_inside_fiber:

    if @async_barrier.nil?
      raise ReactOnRails::Error,
            "You must call stream_view_containing_react_components to render the view containing the react component"
    end

    ✅ You already have this! Good.

  2. Consider documenting the thread-safety assumptions in comments.
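A sketch of such a comment (the wording is a suggestion, not existing code):

# Thread-safety note: @async_barrier and @main_output_queue are per-request
# controller state. They are created inside the Sync block in
# stream_view_containing_react_components and are only touched by fibers
# scheduled on that request's reactor, so no cross-thread locking is needed.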


3. Error Handling: Empty Stream Edge Case

Location: react_on_rails_pro_helper.rb:331-332

# Handle case where stream has no chunks
first_chunk_var.value = nil if is_first

Question: What happens when a stream is completely empty?

  • first_chunk_var.wait returns nil
  • run_stream_inside_fiber returns nil
  • Calling code may not expect nil

Testing: I see the test suite has "handles empty component list" but does it test a component that yields zero chunks?

Suggested addition:
Add a test case for a component that streams zero chunks:

it "handles component with empty stream" do
  empty_queue = Async::Queue.new
  empty_queue.close  # Close immediately without enqueuing
  controller = StreamController.new(component_queues: [empty_queue])
  # ... verify behavior
end

4. CRITICAL REQUIREMENT NOT MET: Missing Local Testing Evidence

Per CLAUDE.md:

CRITICAL - LOCAL TESTING REQUIREMENTS:

  1. NEVER claim a test is "fixed" without running it locally first
  2. Distinguish hypothetical fixes from confirmed fixes

PR Description states:

## Testing
- [ ] Run existing streaming integration tests  ❌ UNCHECKED
- [ ] Verify concurrent streaming still works  ❌ UNCHECKED
- [ ] Test with multiple components              ❌ UNCHECKED
- [ ] Test error handling                        ❌ UNCHECKED

Required before merge:

  1. ✅ Check all boxes in the Testing section
  2. 📝 Document test results:
    cd react_on_rails_pro
    bundle exec rspec spec/react_on_rails_pro/stream_spec.rb
    # Run integration tests with actual Rails app
    cd spec/dummy
    # ... run E2E tests with streaming components
  3. 🔬 Test the specific scenario mentioned in the PR: "run async gem code inside each_chunk"

5. Performance Consideration: Barrier Wait Behavior

Location: stream.rb:69-70

# Wait for all component streaming tasks to complete
@async_barrier.wait

Analysis: Async::Barrier#wait blocks until ALL tasks complete. This means:

  • Good: Ensures all chunks are processed before closing
  • ⚠️ Concern: One slow component blocks the entire response from finishing
  • ⚠️ Concern: Error in one component may prevent others from completing

Current behavior:

  • If Component A streams quickly but Component B hangs, the response won't close until B completes
  • This is probably desired behavior, but worth documenting

Suggested documentation:

# Wait for all component streaming tasks to complete.
# Note: This blocks until ALL components finish streaming. A slow or hanging
# component will delay the entire response completion.
@async_barrier.wait

🔍 Code Quality Notes

1. Good: Defensive Nil Check

on_complete&.call(all_chunks) if on_complete

Using both safe navigation (&.) and conditional check is redundant but harmless. Pick one:

  • on_complete&.call(all_chunks) (preferred)
  • on_complete.call(all_chunks) if on_complete

2. Good: Code Simplification

The drain_streams_concurrently method went from 45 lines to 11 lines. The old code had:

  • build_producer_tasks - now inlined in run_stream_inside_fiber
  • build_writer_task - now simplified to 5 lines
  • Complex queue management - now clearer with barrier + variable pattern

3. Excellent: Test Coverage

The spec file (stream_spec.rb) has comprehensive tests:

  • ✅ Concurrent streaming
  • ✅ Per-component ordering
  • ✅ Empty component lists
  • ✅ Backpressure with slow writer
  • ✅ Single component streaming

Suggested addition: Add test for the new on_complete callback behavior.


🔒 Security Review

No security concerns identified. The changes are internal refactoring:

  • ✅ No new user input handling
  • ✅ No changes to authentication/authorization
  • ✅ No new external dependencies (async gem already used)
  • ✅ Proper error handling preserved

📋 Before Merging Checklist

Must-Do:

  • Complete all testing checkboxes in PR description
  • Run full test suite locally: cd react_on_rails_pro && bundle exec rspec
  • Address memory leak concern in all_chunks accumulation (or document as known limitation)
  • Test the actual use case: Run async gem code inside each_chunk and verify no "Running scheduler on non-blocking fiber" error
  • Add test case for empty stream scenario

Nice-to-Have:

  • Document thread-safety assumptions
  • Add inline comment about barrier wait blocking behavior
  • Add test for on_complete callback
  • Consider size limits for cache accumulation

🎯 Final Verdict

Approve with required changes.

This is a well-architected refactoring that achieves its goal. The main concerns are:

  1. CRITICAL: Memory accumulation in all_chunks for caching
  2. CRITICAL: Local testing not completed (per CLAUDE.md requirements)
  3. Minor: Empty stream edge case testing

The async primitives migration is sound, and the code simplification is valuable. Once the memory concern is addressed and local testing is completed, this will be ready to merge.


Great work on the thorough PR description and thoughtful architecture! 🎉

@coderabbitai coderabbitai bot left a comment

Actionable comments posted: 0

Caution

Some comments are outside the diff and can’t be posted inline due to platform limitations.

⚠️ Outside diff range comments (1)
react_on_rails_pro/app/helpers/react_on_rails_pro_helper.rb (1)

249-280: Cached streaming currently ignores or inconsistently applies on_complete.

In the cached streaming helpers, on_complete behavior diverges between cache MISS and HIT, and a user-supplied on_complete can be silently overwritten:

  • In handle_stream_cache_miss, cache_aware_options = raw_options.merge(on_complete: ->(chunks) { ... }) overwrites any user-provided raw_options[:on_complete], so callers of cached_stream_react_component won’t see their callback invoked on the initial (miss) render.
  • In handle_stream_cache_hit, raw_options[:on_complete] is never used, so callbacks are never invoked on cache hits either.
  • This contradicts the documented contract that “all the same options as stream_react_component apply” to cached_stream_react_component.

A small change can preserve the user callback and make HIT/MISS semantics consistent while still using on_complete internally for caching:

 def handle_stream_cache_hit(component_name, raw_options, auto_load_bundle, cached_chunks)
   render_options = ReactOnRails::ReactComponent::RenderOptions.new(
     react_component_name: component_name,
     options: { auto_load_bundle: auto_load_bundle }.merge(raw_options)
   )
   load_pack_for_generated_component(component_name, render_options)

   initial_result, *rest_chunks = cached_chunks

   # Enqueue remaining chunks asynchronously
   @async_barrier.async do
     rest_chunks.each { |chunk| @main_output_queue.enqueue(chunk) }
   end
+
+  # Invoke any caller-provided on_complete callback with all chunks
+  if (on_complete = raw_options[:on_complete])
+    on_complete.call(cached_chunks)
+  end
+
   # Return first chunk directly
   initial_result
 end

 def handle_stream_cache_miss(component_name, raw_options, auto_load_bundle, view_cache_key, &block)
-  cache_aware_options = raw_options.merge(
-    on_complete: ->(chunks) {
-      Rails.cache.write(view_cache_key, chunks, raw_options[:cache_options] || {})
-    }
-  )
+  user_on_complete = raw_options[:on_complete]
+  cache_aware_options = raw_options.merge(
+    on_complete: ->(chunks) {
+      Rails.cache.write(view_cache_key, chunks, raw_options[:cache_options] || {})
+      user_on_complete&.call(chunks)
+    }
+  )

   render_stream_component_with_props(
     component_name,
     cache_aware_options,
     auto_load_bundle,
     &block
   )
 end

This way:

  • On a MISS, chunks are cached and then any user callback runs once the full set of chunks is known.
  • On a HIT, all cached chunks are already available, so the user callback can be invoked immediately while the remaining chunks are asynchronously enqueued to the main output queue.
♻️ Duplicate comments (1)
react_on_rails_pro/app/helpers/react_on_rails_pro_helper.rb (1)

301-341: Ensure run_stream_inside_fiber never deadlocks waiting for the first chunk.

As written, if the block passed to run_stream_inside_fiber or stream.each_chunk raises before assigning first_chunk_var.value, the final first_chunk_var.wait will block indefinitely and the request will hang. Examples include:

  • The internal renderer raising before returning a stream.
  • The stream raising before yielding any chunks (e.g., upstream error fetching data).

This is the same class of issue that was highlighted in an earlier review; the new async implementation still needs an explicit guarantee that first_chunk_var is set on all code paths (success and failure) so the caller is always unblocked.

You can wrap the async block in a begin/rescue and set first_chunk_var in the error path when we haven’t produced a first chunk yet:

 def run_stream_inside_fiber(on_complete:)
   require "async/variable"

   if @async_barrier.nil?
     raise ReactOnRails::Error,
           "You must call stream_view_containing_react_components to render the view containing the react component"
   end

   # Create a variable to hold the first chunk for synchronous return
   first_chunk_var = Async::Variable.new
-  all_chunks = [] if on_complete  # Only collect if callback provided
-
-  # Start an async task on the barrier to stream all chunks
-  @async_barrier.async do
-    stream = yield
-    is_first = true
-
-    stream.each_chunk do |chunk|
-      all_chunks << chunk if on_complete  # Collect for callback
-
-      if is_first
-        # Store first chunk in variable for synchronous access
-        first_chunk_var.value = chunk
-        is_first = false
-      else
-        # Enqueue remaining chunks to main output queue
-        @main_output_queue.enqueue(chunk)
-      end
-    end
-
-    # Handle case where stream has no chunks
-    first_chunk_var.value = nil if is_first
-
-    # Call callback with all chunks when streaming completes
-    on_complete&.call(all_chunks) if on_complete
-  end
+  all_chunks = [] if on_complete  # Only collect if callback provided
+
+  # Start an async task on the barrier to stream all chunks
+  @async_barrier.async do
+    is_first = true
+
+    begin
+      stream = yield
+
+      stream.each_chunk do |chunk|
+        all_chunks << chunk if on_complete  # Collect for callback
+
+        if is_first
+          # Store first chunk in variable for synchronous access
+          first_chunk_var.value = chunk
+          is_first = false
+        else
+          # Enqueue remaining chunks to main output queue
+          @main_output_queue.enqueue(chunk)
+        end
+      end
+
+      # Handle case where stream has no chunks
+      if is_first
+        first_chunk_var.value = nil
+        is_first = false
+      end
+
+      # Call callback with all chunks when streaming completes
+      on_complete&.call(all_chunks) if on_complete
+    rescue
+      # Ensure the waiting caller is always unblocked on error
+      if is_first
+        first_chunk_var.value = nil
+        is_first = false
+      end
+      raise
+    end
+  end

   # Wait for and return the first chunk (blocking)
   first_chunk_var.wait
   first_chunk_var.value
 end

This keeps the existing behavior (including on_complete) while guaranteeing the caller never blocks forever and still allowing errors to propagate via the barrier.

Please double-check this against the current async gem docs to confirm that setting Async::Variable#value in both success and rescue paths is safe and that wait behaves as expected when the value is set before or during the wait.

🧹 Nitpick comments (2)
react_on_rails_pro/app/helpers/react_on_rails_pro_helper.rb (2)

126-138: On-complete plumbing through stream_react_component looks solid; consider documenting callback shape.

Extracting :on_complete and keeping it out of the options passed to internal_stream_react_component is correct and keeps the React renderer options clean. To make this discoverable, you may want to document :on_complete in the method’s YARD-style param docs (e.g., that it receives an array of all streamed chunks once streaming completes).
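For example (a sketch of a possible annotation):

# @option options [Proc] :on_complete
#   Optional callback invoked once the component finishes streaming.
#   Receives an Array of all streamed chunks, in order.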


188-199: RSC payload streaming now supports on_complete; no extra RSC checks needed here.

Plumbing :on_complete through rsc_payload_react_component into run_stream_inside_fiber mirrors the HTML streaming path and looks consistent. Given prior decisions that RSC enablement is validated deeper in the stack, it’s good that no extra configuration checks were added here (avoids redundant validation). If you intend on_complete to be public for RSC as well, consider briefly documenting it alongside the other options.

Based on learnings, RSC support validation is intentionally handled in lower-level code paths rather than in this helper.

📜 Review details

Configuration used: CodeRabbit UI

Review profile: CHILL

Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between effc7a8 and 6c48628.

📒 Files selected for processing (1)
  • react_on_rails_pro/app/helpers/react_on_rails_pro_helper.rb (4 hunks)
🧰 Additional context used
🧠 Learnings (8)
📚 Learning: 2025-02-18T13:08:01.477Z
Learnt from: AbanoubGhadban
Repo: shakacode/react_on_rails PR: 1644
File: lib/react_on_rails/helper.rb:190-197
Timestamp: 2025-02-18T13:08:01.477Z
Learning: RSC support validation in React on Rails Pro is handled through a chain of validations:
1. Pro version check in `run_stream_inside_fiber`
2. RSC support check during pack generation via `ReactOnRailsPro.configuration.enable_rsc_support`
3. RSC support validation during component registration
This makes additional validation in the helper methods unnecessary.

Applied to files:

  • react_on_rails_pro/app/helpers/react_on_rails_pro_helper.rb
📚 Learning: 2025-02-13T16:50:47.848Z
Learnt from: AbanoubGhadban
Repo: shakacode/react_on_rails PR: 1644
File: node_package/src/clientStartup.ts:18-21
Timestamp: 2025-02-13T16:50:47.848Z
Learning: In the react_on_rails module, the `reactOnRailsPageUnloaded` function in clientStartup.ts is intentionally kept private as it's only used internally as a callback for `onPageUnloaded`.

Applied to files:

  • react_on_rails_pro/app/helpers/react_on_rails_pro_helper.rb
📚 Learning: 2025-02-18T13:08:01.477Z
Learnt from: AbanoubGhadban
Repo: shakacode/react_on_rails PR: 1644
File: lib/react_on_rails/helper.rb:190-197
Timestamp: 2025-02-18T13:08:01.477Z
Learning: RSC support validation is handled in deeper level calls of the React on Rails Pro codebase, so it doesn't need to be validated again in the `rsc_payload_react_component` helper method.

Applied to files:

  • react_on_rails_pro/app/helpers/react_on_rails_pro_helper.rb
📚 Learning: 2025-07-08T05:57:29.630Z
Learnt from: AbanoubGhadban
Repo: shakacode/react_on_rails PR: 1745
File: node_package/src/RSCRequestTracker.ts:8-14
Timestamp: 2025-07-08T05:57:29.630Z
Learning: The global `generateRSCPayload` function in React on Rails Pro (RORP) is provided by the framework during rendering requests, not implemented in application code. The `declare global` statements are used to document the expected interface that RORP will inject at runtime.

Applied to files:

  • react_on_rails_pro/app/helpers/react_on_rails_pro_helper.rb
📚 Learning: 2025-09-15T21:24:48.207Z
Learnt from: AbanoubGhadban
Repo: shakacode/react_on_rails PR: 1781
File: node_package/src/ClientSideRenderer.ts:82-95
Timestamp: 2025-09-15T21:24:48.207Z
Learning: In React on Rails, the force_load feature includes both explicit `data-force-load="true"` usage and the ability to hydrate components during the page loading state (`document.readyState === 'loading'`). Both capabilities require a Pro license, so the condition `!railsContext.rorPro && (isComponentForceLoaded || document.readyState === 'loading')` correctly gates both scenarios.

Applied to files:

  • react_on_rails_pro/app/helpers/react_on_rails_pro_helper.rb
📚 Learning: 2025-09-16T08:01:11.146Z
Learnt from: justin808
Repo: shakacode/react_on_rails PR: 1770
File: lib/generators/react_on_rails/templates/base/base/app/javascript/src/HelloWorld/ror_components/HelloWorld.client.jsx:2-2
Timestamp: 2025-09-16T08:01:11.146Z
Learning: React on Rails uses webpack CSS Modules configuration with namedExports: true, which requires the import syntax `import * as style from './file.module.css'` rather than the default export pattern. This configuration enables better tree shaking and bundle size optimization for CSS modules.

Applied to files:

  • react_on_rails_pro/app/helpers/react_on_rails_pro_helper.rb
📚 Learning: 2025-04-09T12:56:10.756Z
Learnt from: AbanoubGhadban
Repo: shakacode/react_on_rails PR: 1696
File: node_package/src/RSCPayloadContainer.ts:0-0
Timestamp: 2025-04-09T12:56:10.756Z
Learning: In the react_on_rails codebase, RSC payloads are already stringified using `JSON.stringify()` before being processed by the `escapeScript` function, which handles escaping of special characters. The function only needs to handle specific HTML markers like comments and closing script tags.

Applied to files:

  • react_on_rails_pro/app/helpers/react_on_rails_pro_helper.rb
📚 Learning: 2025-04-26T21:55:55.874Z
Learnt from: alexeyr-ci2
Repo: shakacode/react_on_rails PR: 1732
File: spec/dummy/client/app-react16/startup/ReduxSharedStoreApp.client.jsx:40-44
Timestamp: 2025-04-26T21:55:55.874Z
Learning: In the react_on_rails project, files under `app-react16` directories are copied/moved to corresponding `/app` directories during the conversion process (removing the `-react16` suffix), which affects their relative import paths at runtime.

Applied to files:

  • react_on_rails_pro/app/helpers/react_on_rails_pro_helper.rb
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (6)
  • GitHub Check: examples (3.2, minimum)
  • GitHub Check: examples (3.4, latest)
  • GitHub Check: build-dummy-app-webpack-test-bundles
  • GitHub Check: build-dummy-app-webpack-test-bundles
  • GitHub Check: pro-lint-js-and-ruby
  • GitHub Check: claude-review

Replace @rorp_rendering_fibers with @async_barrier and @main_output_queue
in test setup. Update all tests to:

- Wrap in Sync blocks to provide async context
- Use @async_barrier.wait and @main_output_queue.close for synchronization
- Collect chunks from queue instead of calling fiber.resume
- Remove timing-dependent assertions on chunks_read.count

This completes the migration from manual Fiber management to async gem
primitives for component streaming tests.

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude <[email protected]>
@claude

claude bot commented Nov 23, 2025

Code Review: Replace manual Fibers with Async gem primitives

I've reviewed this PR in detail. Overall, this is a well-executed architectural improvement with significant benefits. Here's my detailed feedback:


Strengths

1. Clear Problem Definition and Solution

  • The PR description excellently explains the why and the how
  • Architecture diagrams showing before/after are very helpful
  • Commit messages are detailed and well-structured

2. Code Quality

  • Significant simplification: -40 lines total
  • Clean abstraction with on_complete callback pattern
  • Good separation of concerns
  • Proper error handling maintained

3. Test Coverage

  • Tests comprehensively updated to match new architecture
  • All tests passing (20 examples, 0 failures)
  • Test refactoring follows same async primitives pattern

Critical Issues

1. RuboCop Compliance (BLOCKING)

Per CLAUDE.md, you MUST run bundle exec rubocop in react_on_rails_pro before merging. This is a hard requirement.


Medium Priority Recommendations

2. Memory Management Concern

In run_stream_inside_fiber, all_chunks buffers ALL chunks in memory when caching is enabled. For large streams, this could consume significant memory.

Recommendation: Add documentation comments about memory implications and consider adding warnings for large streams.
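For example (suggested wording only):

# NOTE: When an :on_complete callback is provided, every streamed chunk is
# retained in memory until the stream finishes. Avoid :on_complete for very
# large responses, or cap the buffered size before writing to the cache.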

3. Error Handling Edge Case

What happens if stream.each_chunk raises an exception mid-stream? The on_complete callback won't be called with partial chunks, and partial chunks may remain in the queue.

Recommendation: Consider wrapping streaming logic in a rescue block for proper cleanup.


Low Priority Recommendations

4. Race Condition Documentation

In drain_streams_concurrently, the synchronization logic is correct but deserves comments explaining that barrier.wait ensures all producers finish before signaling EOF.

5. Test Implementation Details

Test expectations changed from eq(1) to eq(3), suggesting tests now verify implementation details rather than observable behavior.

6. Documentation Gaps

Missing YARD docs for on_complete parameter and no migration guide for users who may have customized the old implementation.


Security and Performance

Security: No obvious issues. Backpressure via LimitedQueue is maintained (good for DoS protection).

Performance: Positive changes include concurrent chunk writing and simplified code paths. Potential regression from async gem overhead should be benchmarked.


Testing Recommendations

Before merging:

  1. MANDATORY: cd react_on_rails_pro && bundle exec rubocop
  2. Run full test suite: bundle exec rake
  3. Run specific streaming tests
  4. Manual testing with large payloads, errors, and caching scenarios

Final Verdict

Overall Assessment: Approve with minor recommendations

This is high-quality work that:

  • Enables async code in each_chunk (primary goal)
  • Simplifies codebase significantly
  • Maintains all existing functionality
  • Uses clean, understandable code

Before merging:

  1. MUST: Run bundle exec rubocop and fix violations
  2. SHOULD: Add memory management documentation
  3. SHOULD: Add YARD docs for on_complete
  4. CONSIDER: Error handling improvements

Risk Level: Low-Medium

Per CLAUDE.md guidelines, also consider adding an entry to CHANGELOG_PRO.md as this is a user-visible architectural change.

Great work overall! Address the RuboCop check and consider the recommendations before merging.

Generated with Claude Code

@coderabbitai coderabbitai bot left a comment

Actionable comments posted: 0

🧹 Nitpick comments (1)
react_on_rails_pro/spec/dummy/spec/helpers/react_on_rails_pro_helper_spec.rb (1)

364-377: Async streaming patterns verified as correct; optional DRY refactor recommended

The async patterns you've used are confirmed to be correct and follow the async gem's intended approach. After Async::Queue#close, calling #dequeue returns nil once the queue is drained, and the sequence barrier.wait → queue.close → while (chunk = queue.dequeue) is the recommended way to consume all remaining items.

The test scaffolding correctly simulates the streaming context that stream_view_containing_react_components provides, and your drain pattern mirrors the drain_streams_concurrently behavior for deterministic chunk collection.

Optional refactor: The Sync + barrier/queue initialization and wait/close/dequeue loops repeat across lines 364–377, 390–392, 394–423, 438–451, 721–742, and 782–788. Extracting these into small helper methods (e.g., with_streaming_async_context and wait_and_collect_stream_chunks) would reduce duplication and make future streaming changes easier to propagate across the spec.
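A possible shape for those helpers (the names are the ones suggested above; the queue capacity is arbitrary):

def with_streaming_async_context
  Sync do |task|
    @async_barrier = Async::Barrier.new
    @main_output_queue = Async::LimitedQueue.new(8)
    yield task
  end
end

def wait_and_collect_stream_chunks
  @async_barrier.wait
  @main_output_queue.close
  chunks = []
  while (chunk = @main_output_queue.dequeue)
    chunks << chunk
  end
  chunks
end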

📜 Review details

Configuration used: CodeRabbit UI

Review profile: CHILL

Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between 6c48628 and 204561c.

📒 Files selected for processing (1)
  • react_on_rails_pro/spec/dummy/spec/helpers/react_on_rails_pro_helper_spec.rb (6 hunks)
🧰 Additional context used
🧠 Learnings (7)
📚 Learning: 2024-10-08T20:53:47.076Z
Learnt from: theforestvn88
Repo: shakacode/react_on_rails PR: 1620
File: spec/dummy/client/app/startup/HelloTurboStream.jsx:3-3
Timestamp: 2024-10-08T20:53:47.076Z
Learning: The `RailsContext` import in `spec/dummy/client/app/startup/HelloTurboStream.jsx` is used later in the project, as clarified by the user theforestvn88.

Applied to files:

  • react_on_rails_pro/spec/dummy/spec/helpers/react_on_rails_pro_helper_spec.rb
📚 Learning: 2025-02-18T13:08:01.477Z
Learnt from: AbanoubGhadban
Repo: shakacode/react_on_rails PR: 1644
File: lib/react_on_rails/helper.rb:190-197
Timestamp: 2025-02-18T13:08:01.477Z
Learning: RSC support validation in React on Rails Pro is handled through a chain of validations:
1. Pro version check in `run_stream_inside_fiber`
2. RSC support check during pack generation via `ReactOnRailsPro.configuration.enable_rsc_support`
3. RSC support validation during component registration
This makes additional validation in the helper methods unnecessary.

Applied to files:

  • react_on_rails_pro/spec/dummy/spec/helpers/react_on_rails_pro_helper_spec.rb
📚 Learning: 2025-07-08T05:57:29.630Z
Learnt from: AbanoubGhadban
Repo: shakacode/react_on_rails PR: 1745
File: node_package/src/RSCRequestTracker.ts:8-14
Timestamp: 2025-07-08T05:57:29.630Z
Learning: The global `generateRSCPayload` function in React on Rails Pro (RORP) is provided by the framework during rendering requests, not implemented in application code. The `declare global` statements are used to document the expected interface that RORP will inject at runtime.

Applied to files:

  • react_on_rails_pro/spec/dummy/spec/helpers/react_on_rails_pro_helper_spec.rb
📚 Learning: 2025-09-15T21:24:48.207Z
Learnt from: AbanoubGhadban
Repo: shakacode/react_on_rails PR: 1781
File: node_package/src/ClientSideRenderer.ts:82-95
Timestamp: 2025-09-15T21:24:48.207Z
Learning: In React on Rails, the force_load feature includes both explicit `data-force-load="true"` usage and the ability to hydrate components during the page loading state (`document.readyState === 'loading'`). Both capabilities require a Pro license, so the condition `!railsContext.rorPro && (isComponentForceLoaded || document.readyState === 'loading')` correctly gates both scenarios.

Applied to files:

  • react_on_rails_pro/spec/dummy/spec/helpers/react_on_rails_pro_helper_spec.rb
📚 Learning: 2025-04-09T13:02:16.009Z
Learnt from: AbanoubGhadban
Repo: shakacode/react_on_rails PR: 1696
File: node_package/src/transformRSCNodeStream.ts:3-31
Timestamp: 2025-04-09T13:02:16.009Z
Learning: In the React Server Components (RSC) stream processing implementation, all chunks end with a newline character (`\n`), so the `lastIncompleteChunk` variable is expected to be empty by the time the stream ends, making a flush handler unnecessary in the Transform stream.

Applied to files:

  • react_on_rails_pro/spec/dummy/spec/helpers/react_on_rails_pro_helper_spec.rb
📚 Learning: 2025-04-09T12:56:10.756Z
Learnt from: AbanoubGhadban
Repo: shakacode/react_on_rails PR: 1696
File: node_package/src/RSCPayloadContainer.ts:0-0
Timestamp: 2025-04-09T12:56:10.756Z
Learning: In the react_on_rails codebase, RSC payloads are already stringified using `JSON.stringify()` before being processed by the `escapeScript` function, which handles escaping of special characters. The function only needs to handle specific HTML markers like comments and closing script tags.

Applied to files:

  • react_on_rails_pro/spec/dummy/spec/helpers/react_on_rails_pro_helper_spec.rb
📚 Learning: 2024-10-08T20:53:47.076Z
Learnt from: justin808
Repo: shakacode/react_on_rails PR: 1622
File: spec/dummy/spec/rake/assets_precompile_rake_spec.rb:12-12
Timestamp: 2024-10-08T20:53:47.076Z
Learning: When stubbing environment variables in RSpec tests, use `before` and `after` hooks to ensure that the original values are restored after the tests, preventing any side effects on other tests. Example provided by justin808:
```ruby
describe "My test" do
  before do
    original_value = ENV["VARIABLE_NAME"]
    allow(ENV).to receive(:[]).with("VARIABLE_NAME").and_return("stubbed_value")
  end

  after do
    allow(ENV).to receive(:[]).with("VARIABLE_NAME").and_call_original
    ENV["VARIABLE_NAME"] = original_value
  end

  it "tests something" do
    # Your test code here
  end
end
```
This practice ensures test isolation and reliability.

Applied to files:

  • react_on_rails_pro/spec/dummy/spec/helpers/react_on_rails_pro_helper_spec.rb
🧬 Code graph analysis (1)
react_on_rails_pro/spec/dummy/spec/helpers/react_on_rails_pro_helper_spec.rb (2)
react_on_rails_pro/spec/react_on_rails_pro/stream_spec.rb (1)
  • include (8-35)
spec/dummy/spec/helpers/react_on_rails_helper_spec.rb (1)
  • include (6-9)
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (6)
  • GitHub Check: examples (3.4, latest)
  • GitHub Check: examples (3.2, minimum)
  • GitHub Check: build-dummy-app-webpack-test-bundles
  • GitHub Check: build-dummy-app-webpack-test-bundles
  • GitHub Check: pro-lint-js-and-ruby
  • GitHub Check: claude-review
🔇 Additional comments (1)
react_on_rails_pro/spec/dummy/spec/helpers/react_on_rails_pro_helper_spec.rb (1)

3-6: Async::Barrier require is appropriate and verified

The canonical require path for using Async::Barrier is require 'async/barrier', confirming that line 5 is correct. No issues found.

@claude

claude bot commented Nov 23, 2025

Code Review: Replace manual Fibers with Async gem primitives

I've thoroughly reviewed this PR. Overall, this is a well-architected refactoring that modernizes the concurrency model. The code quality is high, and the approach is sound. Here are my detailed findings:


✅ Strengths

1. Excellent Architecture Decision

  • Replacing manual Fiber management with async gem primitives is the right approach for enabling async operations inside each_chunk
  • The use of Async::Barrier, Async::Variable, and Async::LimitedQueue shows good understanding of async patterns
  • Clear separation of concerns between synchronous first chunk delivery and async streaming

2. Code Simplification

  • 34 lines removed from stream.rb with improved clarity
  • Eliminated complex build_producer_tasks and build_writer_task methods
  • The flow is now easier to follow and reason about

3. Smart Caching Design

  • Using on_complete callback for cache writes is elegant and decouples caching logic from streaming
  • The callback approach in handle_stream_cache_miss (lines 268-272) is cleaner than the previous Fiber-wrapping approach

4. Test Coverage

  • Tests properly updated to use async primitives
  • Good use of around blocks to provide async context
  • Tests verify both synchronous first chunk and async remainder behavior

🔍 Potential Issues & Recommendations

1. Critical: Memory Leak Risk with all_chunks Array

Location: react_on_rails_pro_helper.rb:311

all_chunks = [] if on_complete # Only collect if callback provided

Issue: For components with many chunks or large payloads, this array accumulates all chunks in memory before calling the callback. This could cause memory bloat in production.

Recommendation: Consider streaming to cache incrementally if the async gem supports it, or add configuration to limit chunk buffering.

2. Error Handling: Barrier Failure Propagation

Location: stream.rb:70

@async_barrier.wait

Question: What happens if one component's async task fails? Does @async_barrier.wait properly propagate exceptions?

Testing Needed: Add a test case where one component throws an error during streaming to verify:

  • Other components continue streaming
  • OR all components abort gracefully
  • Error is properly propagated to the controller

3. Resource Cleanup: Queue Not Explicitly Closed on Error

Location: stream.rb:51-56

begin
  drain_streams_concurrently(parent_task)
ensure
  response.stream.close if close_stream_at_end
end

Issue: If drain_streams_concurrently raises an exception before line 73 (@main_output_queue.close), the queue may not be closed, potentially leaving the writing_task blocked.

Recommendation: Wrap queue operations in a nested ensure block:

def drain_streams_concurrently(parent_task)
  writing_task = parent_task.async do
    while (chunk = @main_output_queue.dequeue)
      response.stream.write(chunk)
    end
  end

  begin
    @async_barrier.wait
  ensure
    @main_output_queue.close
    writing_task.wait
  end
end

4. Empty Stream Edge Case

Location: react_on_rails_pro_helper.rb:332

first_chunk_var.value = nil if is_first

Good: This handles empty streams correctly.

Question: Should empty streams trigger the on_complete callback with an empty array? Currently they do (line 335), which seems correct, but worth confirming this is intended behavior.

5. Test Modification: Behavioral Change

Location: react_on_rails_pro_helper_spec.rb:392

# NOTE: With async architecture, chunks are consumed in background immediately,
expect(chunks_read.count).to eq(3)

Observation: The old implementation consumed chunks lazily (on Fiber.resume), while the new implementation consumes eagerly in the background.

Implications:

  • ✅ This is fine for performance (chunks processed concurrently)
  • ⚠️ But means backpressure now depends entirely on LimitedQueue capacity
  • Document this behavioral change if not already in the PR description

🔒 Security Considerations

No Major Security Issues Found

  • ✅ No injection vulnerabilities introduced
  • ✅ Error messages don't leak sensitive information
  • ✅ Stream closing is handled properly in ensure blocks
  • ✅ No changes to authentication/authorization logic

⚡ Performance Considerations

Positive Changes:

  1. Concurrent execution maintained via Async::Barrier
  2. Backpressure preserved via LimitedQueue (buffer_size from config)
  3. First chunk still synchronous - no regression in perceived load time

Potential Concerns:

  1. Memory usage: all_chunks array accumulates the entire response in memory for caching (see issue #1 above)
  2. Eager consumption: All chunks consumed immediately vs. on-demand with Fibers (likely a wash in practice)

📋 Testing Recommendations

Based on CLAUDE.md requirements, please verify:

  1. Local testing (CRITICAL per CLAUDE.md):

    cd react_on_rails_pro
    bundle exec rspec spec/react_on_rails_pro/stream_spec.rb
    bundle exec rspec spec/dummy/spec/helpers/react_on_rails_pro_helper_spec.rb
  2. Error handling test (add new test):

    • Simulate a component that raises an error mid-stream
    • Verify other components continue OR all abort gracefully
  3. Memory profiling (optional but recommended):

    • Test with components generating 100+ chunks
    • Monitor memory usage with all_chunks array accumulation
  4. Integration test (verify existing tests pass):

    • Multi-component streaming
    • Caching with multiple components
    • Empty component streams

✅ Checklist from PR Description

From the PR, these are marked as TODO:

  • Run existing streaming integration tests
  • Verify concurrent streaming still works
  • Test with multiple components
  • Test error handling

Recommendation: Mark these as completed (with checkmarks) before merging to signal readiness.


🎯 Final Verdict

Recommendation: Approve with minor fixes

This is a solid refactoring that achieves its primary goal (enabling async code in each_chunk). The code is cleaner and more maintainable.

Before merging:

  1. Add ensure block to drain_streams_concurrently for queue cleanup (critical)
  2. Add test case for error handling during streaming (recommended)
  3. Consider memory implications of all_chunks array for large responses (optional - can be follow-up)
  4. Complete the testing checklist in the PR description

Great work on this refactoring! The async gem integration is well thought out, and the simplified code will be easier to maintain going forward.


📚 References

  • Async gem documentation
  • Project conventions: CLAUDE.md in repository
  • Related: Enables upcoming React on Rails async props feature

Review generated by Claude Code - Available for follow-up questions or clarifications

@coderabbitai coderabbitai bot left a comment

Actionable comments posted: 0

Caution

Some comments are outside the diff and can’t be posted inline due to platform limitations.

⚠️ Outside diff range comments (1)
react_on_rails_pro/app/helpers/react_on_rails_pro_helper.rb (1)

249-261: Add guard to handle_stream_cache_hit for consistent error messaging

The cache hit path (line 259) calls @async_barrier.async without checking if the async context exists, creating an asymmetry with the cache miss path, which eventually reaches run_stream_inside_fiber with its explicit guard at lines 304-307. If cached_stream_react_component is called outside stream_view_containing_react_components, the cache hit returns a cryptic NoMethodError, while cache miss would raise a clear error message.

Add this guard before line 259:

raise ReactOnRails::Error, "You must call stream_view_containing_react_components to render the view containing the react component" if @async_barrier.nil?

This ensures consistent, actionable error messages regardless of cache state.
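
For context, a sketch of where such a guard could sit; the method signature below is taken from the cache-hit snippet quoted later in this thread, and the body is elided:

def handle_stream_cache_hit(component_name, raw_options, auto_load_bundle, cache_key, &_block)
  if @async_barrier.nil?
    raise ReactOnRails::Error,
          "You must call stream_view_containing_react_components to render " \
          "the view containing the react component"
  end

  # ... existing cache-hit logic: read the cached chunks, return the first,
  # and enqueue the rest on @main_output_queue via @async_barrier.async ...
end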

♻️ Duplicate comments (1)
react_on_rails_pro/app/helpers/react_on_rails_pro_helper.rb (1)

301-341: Ensure run_stream_inside_fiber cannot hang if streaming fails before setting first_chunk_var

The new Async-based run_stream_inside_fiber has a solid structure (barrier-scoped task, Async::Variable for sync first-chunk return, and optional all_chunks collection for on_complete). However, there’s still a subtle failure mode:

  • If yield (e.g., internal_stream_react_component / internal_rsc_payload_react_component) or stream.each_chunk raises before any chunk is processed, first_chunk_var.value is never set.
  • The outer call to first_chunk_var.wait will then block indefinitely unless the Async runtime aborts the current task in a way that raises here, which is not guaranteed by this method’s local logic.

This is the same deadlock risk previously flagged in an earlier review on this PR; the current version appears to have reintroduced it by removing the defensive assignment in the exception path. To make this bulletproof, the async block should guarantee that first_chunk_var.value is set exactly once, even on error, before re-raising:

  @async_barrier.async do
-   stream = yield
-   is_first = true
-
-   stream.each_chunk do |chunk|
-     all_chunks << chunk if on_complete
-     if is_first
-       first_chunk_var.value = chunk
-       is_first = false
-     else
-       @main_output_queue.enqueue(chunk)
-     end
-   end
-
-   first_chunk_var.value = nil if is_first
-   on_complete&.call(all_chunks)
+   begin
+     stream = yield
+     is_first = true
+
+     stream.each_chunk do |chunk|
+       all_chunks << chunk if on_complete
+
+       if is_first
+         first_chunk_var.value = chunk
+         is_first = false
+       else
+         @main_output_queue.enqueue(chunk)
+       end
+     end
+
+     # Handle case where stream has no chunks
+     first_chunk_var.value = nil if is_first
+     on_complete&.call(all_chunks)
+   rescue => e
+     # Ensure waiter is unblocked even on failure
+     first_chunk_var.value = nil if first_chunk_var.value.nil?
+     raise
+   end
  end

You can also simplify the caller to return first_chunk_var.wait and drop the separate .value read.

This ensures that:

  • callers never hang on wait, and
  • failures still propagate (so error handling/circuit-breaking happens as expected).

Open question (for verification): In the Ruby async gem, does `Async::Variable#wait` automatically raise if the task that was supposed to set its value fails before assignment, or can it block indefinitely unless the code explicitly sets a value in error paths?
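
For reference, a minimal sketch (not from the PR) of how Async::Variable coordinates an assigning task and a waiter; it illustrates why the error path must assign a value explicitly:

require "async"
require "async/variable"

Sync do |task|
  var = Async::Variable.new

  task.async do
    # If an exception were raised here before the assignment, nothing would
    # ever resolve `var`, and the `var.wait` below would stay suspended; the
    # variable does not raise on the waiter's behalf.
    var.value = :first_chunk
  end

  var.wait           # suspends this fiber until a value is assigned
  puts var.value     # => :first_chunk
end
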
🧹 Nitpick comments (4)
react_on_rails_pro/spec/dummy/spec/helpers/react_on_rails_pro_helper_spec.rb (3)

5-5: Rubocop pragma and async require look appropriate, but scope could be narrower

The added require "async/barrier" matches the new Async usage in this spec, and the RSpec/InstanceVariable disable/enable is justified given the shared @async_barrier, @main_output_queue, and other ivars across examples. If you want to keep cops tighter, consider limiting the disable/enable region to just the streaming-related describe blocks rather than the entire file, but it's not strictly necessary.

Also applies to: 11-12, 820-820


784-790: Around hook for attribution specs is consistent but duplicated with prior setup

The attribution specs now use their own around hook to set up Sync, @async_barrier, and @main_output_queue, which is consistent with the earlier streaming specs. Given this pattern is repeated in multiple contexts (#stream_react_component examples and here), you may want to factor out a small shared helper (e.g., with_stream_async_context) to reduce duplication and keep future changes to the async harness in one place.


365-379: Use Async::LimitedQueue in test setup to match production implementation

The queue drain logic is sound—Async::Queue#dequeue correctly returns nil when closed and empty, so both the current while pattern and the suggested explicit break if nil pattern work without risk of hanging. However, there's a queue type mismatch: production uses Async::LimitedQueue (stream.rb line 42), but the test initializes a plain Async::Queue (line 371). Using the same queue type reduces behavioral drift between tests and runtime.

Change line 371 from:

@main_output_queue = Async::Queue.new

to:

@main_output_queue = Async::LimitedQueue.new(buffer_size)

Alternatively, if buffer sizing isn't needed for tests, you may document why the plain Queue is preferred. The explicit nil-break style suggestion is optional—both patterns are sound and idiomatic.
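
For illustration, a minimal sketch (not from the PR; the buffer size of 2 and integer chunks are assumptions) of the backpressure behavior that distinguishes Async::LimitedQueue from the unbounded Async::Queue, using the same close-then-drain idiom as the PR:

require "async"
require "async/queue"

Sync do |task|
  queue = Async::LimitedQueue.new(2)

  producer = task.async do
    5.times { |i| queue.enqueue(i) } # suspends whenever the buffer is full
    queue.close                      # after close, dequeue returns nil once empty
  end

  while (chunk = queue.dequeue)      # nil terminates the drain loop
    puts chunk
  end

  producer.wait
end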

react_on_rails_pro/app/helpers/react_on_rails_pro_helper.rb (1)

132-135: New on_complete callback plumbing is clean but undocumented in the helper docs

Extracting :on_complete from options and passing it into run_stream_inside_fiber(on_complete: ...) is a tidy way to introduce a streaming-completion callback without leaking this option down into the React renderer.

To make this discoverable and avoid confusion for users, consider updating the yard-style documentation above stream_react_component and rsc_payload_react_component to mention that:

  • :on_complete is an accepted option,
  • it receives an array of all streamed chunks (including the first), and
  • it runs once after the stream has fully completed.

This keeps the public API and implementation in sync.

Also applies to: 193-196
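
A hedged sketch of what that documentation might look like (wording assumed, not from the PR):

# @option options [Proc] :on_complete Optional callback invoked exactly once
#   after the component's stream has fully completed. Receives an Array of
#   all streamed chunks, including the first one.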

📜 Review details

Configuration used: CodeRabbit UI

Review profile: CHILL

Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between 204561c and d6c04d2.

📒 Files selected for processing (2)
  • react_on_rails_pro/app/helpers/react_on_rails_pro_helper.rb (4 hunks)
  • react_on_rails_pro/spec/dummy/spec/helpers/react_on_rails_pro_helper_spec.rb (7 hunks)
🧰 Additional context used
🧠 Learnings (10)
📚 Learning: 2025-02-18T13:08:01.477Z
Learnt from: AbanoubGhadban
Repo: shakacode/react_on_rails PR: 1644
File: lib/react_on_rails/helper.rb:190-197
Timestamp: 2025-02-18T13:08:01.477Z
Learning: RSC support validation in React on Rails Pro is handled through a chain of validations:
1. Pro version check in `run_stream_inside_fiber`
2. RSC support check during pack generation via `ReactOnRailsPro.configuration.enable_rsc_support`
3. RSC support validation during component registration
This makes additional validation in the helper methods unnecessary.

Applied to files:

  • react_on_rails_pro/app/helpers/react_on_rails_pro_helper.rb
  • react_on_rails_pro/spec/dummy/spec/helpers/react_on_rails_pro_helper_spec.rb
📚 Learning: 2025-02-13T16:50:47.848Z
Learnt from: AbanoubGhadban
Repo: shakacode/react_on_rails PR: 1644
File: node_package/src/clientStartup.ts:18-21
Timestamp: 2025-02-13T16:50:47.848Z
Learning: In the react_on_rails module, the `reactOnRailsPageUnloaded` function in clientStartup.ts is intentionally kept private as it's only used internally as a callback for `onPageUnloaded`.

Applied to files:

  • react_on_rails_pro/app/helpers/react_on_rails_pro_helper.rb
📚 Learning: 2025-09-15T21:24:48.207Z
Learnt from: AbanoubGhadban
Repo: shakacode/react_on_rails PR: 1781
File: node_package/src/ClientSideRenderer.ts:82-95
Timestamp: 2025-09-15T21:24:48.207Z
Learning: In React on Rails, the force_load feature includes both explicit `data-force-load="true"` usage and the ability to hydrate components during the page loading state (`document.readyState === 'loading'`). Both capabilities require a Pro license, so the condition `!railsContext.rorPro && (isComponentForceLoaded || document.readyState === 'loading')` correctly gates both scenarios.

Applied to files:

  • react_on_rails_pro/app/helpers/react_on_rails_pro_helper.rb
  • react_on_rails_pro/spec/dummy/spec/helpers/react_on_rails_pro_helper_spec.rb
📚 Learning: 2025-02-18T13:08:01.477Z
Learnt from: AbanoubGhadban
Repo: shakacode/react_on_rails PR: 1644
File: lib/react_on_rails/helper.rb:190-197
Timestamp: 2025-02-18T13:08:01.477Z
Learning: RSC support validation is handled in deeper level calls of the React on Rails Pro codebase, so it doesn't need to be validated again in the `rsc_payload_react_component` helper method.

Applied to files:

  • react_on_rails_pro/app/helpers/react_on_rails_pro_helper.rb
📚 Learning: 2025-04-09T12:56:10.756Z
Learnt from: AbanoubGhadban
Repo: shakacode/react_on_rails PR: 1696
File: node_package/src/RSCPayloadContainer.ts:0-0
Timestamp: 2025-04-09T12:56:10.756Z
Learning: In the react_on_rails codebase, RSC payloads are already stringified using `JSON.stringify()` before being processed by the `escapeScript` function, which handles escaping of special characters. The function only needs to handle specific HTML markers like comments and closing script tags.

Applied to files:

  • react_on_rails_pro/app/helpers/react_on_rails_pro_helper.rb
  • react_on_rails_pro/spec/dummy/spec/helpers/react_on_rails_pro_helper_spec.rb
📚 Learning: 2025-07-08T05:57:29.630Z
Learnt from: AbanoubGhadban
Repo: shakacode/react_on_rails PR: 1745
File: node_package/src/RSCRequestTracker.ts:8-14
Timestamp: 2025-07-08T05:57:29.630Z
Learning: The global `generateRSCPayload` function in React on Rails Pro (RORP) is provided by the framework during rendering requests, not implemented in application code. The `declare global` statements are used to document the expected interface that RORP will inject at runtime.

Applied to files:

  • react_on_rails_pro/app/helpers/react_on_rails_pro_helper.rb
  • react_on_rails_pro/spec/dummy/spec/helpers/react_on_rails_pro_helper_spec.rb
📚 Learning: 2024-10-08T20:53:47.076Z
Learnt from: theforestvn88
Repo: shakacode/react_on_rails PR: 1620
File: spec/dummy/client/app/startup/HelloTurboStream.jsx:3-3
Timestamp: 2024-10-08T20:53:47.076Z
Learning: The `RailsContext` import in `spec/dummy/client/app/startup/HelloTurboStream.jsx` is used later in the project, as clarified by the user theforestvn88.

Applied to files:

  • react_on_rails_pro/spec/dummy/spec/helpers/react_on_rails_pro_helper_spec.rb
📚 Learning: 2025-04-09T13:02:16.009Z
Learnt from: AbanoubGhadban
Repo: shakacode/react_on_rails PR: 1696
File: node_package/src/transformRSCNodeStream.ts:3-31
Timestamp: 2025-04-09T13:02:16.009Z
Learning: In the React Server Components (RSC) stream processing implementation, all chunks end with a newline character (`\n`), so the `lastIncompleteChunk` variable is expected to be empty by the time the stream ends, making a flush handler unnecessary in the Transform stream.

Applied to files:

  • react_on_rails_pro/spec/dummy/spec/helpers/react_on_rails_pro_helper_spec.rb
📚 Learning: 2024-10-08T20:53:47.076Z
Learnt from: justin808
Repo: shakacode/react_on_rails PR: 1622
File: spec/dummy/spec/rake/assets_precompile_rake_spec.rb:12-12
Timestamp: 2024-10-08T20:53:47.076Z
Learning: When stubbing environment variables in RSpec tests, use `before` and `after` hooks to ensure that the original values are restored after the tests, preventing any side effects on other tests. Example provided by justin808:
```ruby
describe "My test" do
  before do
    original_value = ENV["VARIABLE_NAME"]
    allow(ENV).to receive(:[]).with("VARIABLE_NAME").and_return("stubbed_value")
  end

  after do
    allow(ENV).to receive(:[]).with("VARIABLE_NAME").and_call_original
    ENV["VARIABLE_NAME"] = original_value
  end

  it "tests something" do
    # Your test code here
  end
end
```
This practice ensures test isolation and reliability.

Applied to files:

  • react_on_rails_pro/spec/dummy/spec/helpers/react_on_rails_pro_helper_spec.rb
📚 Learning: 2025-02-13T14:29:49.267Z
Learnt from: AbanoubGhadban
Repo: shakacode/react_on_rails PR: 1644
File: spec/react_on_rails/utils_spec.rb:218-218
Timestamp: 2025-02-13T14:29:49.267Z
Learning: In RSpec tests, prefer using local variables over constants within test blocks to avoid constant redefinition warnings and maintain better test isolation.

Applied to files:

  • react_on_rails_pro/spec/dummy/spec/helpers/react_on_rails_pro_helper_spec.rb
🧬 Code graph analysis (1)
react_on_rails_pro/app/helpers/react_on_rails_pro_helper.rb (2)
packages/react-on-rails-pro/tests/AsyncQueue.ts (1)
  • enqueue (10-20)
lib/react_on_rails/react_component/render_options.rb (1)
  • auto_load_bundle (74-76)
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (6)
  • GitHub Check: examples (3.2, minimum)
  • GitHub Check: examples (3.4, latest)
  • GitHub Check: pro-lint-js-and-ruby
  • GitHub Check: build-dummy-app-webpack-test-bundles
  • GitHub Check: build-dummy-app-webpack-test-bundles
  • GitHub Check: claude-review
🔇 Additional comments (2)
react_on_rails_pro/app/helpers/react_on_rails_pro_helper.rb (1)

268-279: Write-through caching via on_complete is well-integrated

The MISS path’s cache_aware_options:

cache_aware_options = raw_options.merge(
  on_complete: lambda { |chunks|
    Rails.cache.write(view_cache_key, chunks, raw_options[:cache_options] || {})
  }
)

plugs neatly into run_stream_inside_fiber’s on_complete handling so that:

  • all streamed chunks are collected server-side, and
  • the cache is populated only after the full stream completes.

This design avoids eager evaluation of props, keeps the cache shape consistent with the HIT path (Array of chunks), and correctly threads through cache_options. Nicely done.

react_on_rails_pro/spec/dummy/spec/helpers/react_on_rails_pro_helper_spec.rb (1)

723-745: Current code correctly follows async gem idioms; the review concern is unfounded

The verification confirms that using nil as a termination sentinel after Async::Queue#close is the intended/idiomatic behaviour in the ruby async gem. The bare while @main_output_queue.dequeue loop is not fragile—it's the established pattern. No refactoring is necessary.

Likely an incorrect or invalid review comment.

@claude claude bot commented Nov 23, 2025

Code Review: Replace manual Fibers with Async gem primitives

Great work on this architectural refactoring! This is a well-thought-out solution that simplifies the code while enabling future async capabilities. Here's my detailed review:


Strengths

1. Architectural Improvement

  • Unified concurrency model: Replacing manual Fiber management with async gem primitives (Barrier, Variable, LimitedQueue) is the right approach
  • Enables future work: This unblocks the async props implementation which was the primary goal
  • Code reduction: -40 lines with simpler, more maintainable logic

2. Smart Use of Async Primitives

  • Async::Barrier: Perfect for tracking multiple concurrent streaming tasks
  • Async::Variable: Elegant solution for first-chunk synchronous delivery
  • Async::LimitedQueue: Maintains backpressure control

3. Test Coverage

  • Tests properly updated to use async primitives
  • around blocks correctly wrap tests in Sync context
  • Test expectations adjusted for async behavior

🔍 Potential Issues & Suggestions

1. Critical: Thread Safety of Instance Variables

Issue: @async_barrier and @main_output_queue are controller instance variables accessed from multiple async tasks.

# In stream.rb - line 40
@async_barrier = Async::Barrier.new
@main_output_queue = Async::LimitedQueue.new

Risk: If the same controller instance handles concurrent requests (e.g., in threaded servers like Puma), these could be shared across requests.

Recommendation:

  • Consider using request-local storage or ensuring these are properly scoped per request
  • Add documentation clarifying thread-safety assumptions
  • Alternatively, use a RequestLocal pattern similar to ReactOnRails::RequestStore

2. Memory Management: on_complete Callback

Issue: When on_complete callback is provided, ALL chunks are buffered in memory:

# helper.rb line 310
all_chunks = [] if on_complete  # Collects all chunks

Risk: For large responses or high concurrency, this could consume significant memory.

Recommendations:

  • Document this memory implication in method documentation
  • Consider a streaming-friendly callback pattern (e.g., callback per chunk)
  • Or add a config option to limit chunk buffering size

3. Error Handling: Empty Stream Case

Code:

# helper.rb line 331
first_chunk_var.value = nil if is_first

Question: What happens when a component has no chunks? Does returning nil cause rendering issues?

Recommendation:

  • Add a test case for empty streams
  • Document expected behavior
  • Consider whether empty HTML or error is more appropriate

4. Queue Cleanup on Errors

Issue: If an error occurs during streaming, the queue might not be properly closed.

Recommendation:

# In drain_streams_concurrently
begin
  @async_barrier.wait
ensure
  @main_output_queue.close
  writing_task.wait
end

This ensures cleanup even on exceptions. (Current code does have ensure for stream closing but not for queue)

5. Race Condition: First Chunk Variable

Subtle issue: There's a potential race between:

  1. Task starting (@async_barrier.async)
  2. Variable wait (first_chunk_var.wait)

If the async scheduler doesn't immediately start the task, the wait could happen before the task runs, though this is unlikely with async gem's scheduling.

Recommendation: Add a comment explaining the synchronization guarantee, or consider using Async::Notification which might be more explicit for this pattern.
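
To make the comparison concrete, a small sketch (not from the PR) contrasting the two primitives; it assumes the async gem's behavior that a task created with task.async starts running immediately, up to its first blocking operation:

require "async"
require "async/variable"

Sync do |task|
  # Async::Variable holds state: a value assigned before #wait is still seen
  # by the waiter, so the ordering of assignment and wait cannot lose a wakeup.
  var = Async::Variable.new
  task.async { var.value = :chunk } # child runs eagerly and assigns first
  var.wait                          # returns immediately; already resolved

  # Async::Notification, by contrast, only wakes fibers that are already
  # blocked in #wait when #signal fires, so signalling before waiting would
  # lose the wakeup. That makes Variable the safer fit for this pattern.
end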


🎯 Code Quality & Best Practices

1. Good: Backward Compatibility

  • ✅ All existing streaming behavior preserved
  • ✅ Tests updated appropriately
  • ✅ First-chunk-synchronous pattern maintained

2. Good: Documentation

  • ✅ PR description is excellent with clear before/after architecture
  • ⚠️ Consider adding inline documentation for:
    • Thread-safety assumptions
    • Memory implications of on_complete
    • Async gem version requirements

3. Testing Suggestions

Missing test cases:

  1. Empty stream: Component with no chunks
  2. Error propagation: What happens when stream.each_chunk raises?
  3. Multiple concurrent components: Stress test with many components streaming
  4. Large chunks: Memory behavior with very large responses
  5. on_complete callback: Verify all chunks are collected correctly

Example test to add:

it "handles empty streams gracefully" do
  allow_any_instance_of(Stream).to receive(:each_chunk) # stub never yields any chunks
  result = stream_react_component(component_name, props: props)
  expect(result).to be_nil  # or expect appropriate fallback
end

🔒 Security Considerations

  1. Resource exhaustion: No obvious DoS vectors, but consider:

    • Maximum number of concurrent streaming components
    • Maximum chunk buffer size for on_complete
    • Timeout for hung streams
  2. Exception handling: Ensure errors in streaming don't leak sensitive info


Performance Considerations

  1. Memory overhead: Async::Variable and queue per component is acceptable
  2. Backpressure: ✅ LimitedQueue maintains backpressure
  3. Synchronization cost: First chunk synchronization adds minimal overhead

📋 Minor Issues

1. Rubocop Disable Comment

# rubocop:disable RSpec/InstanceVariable

Suggestion: Consider refactoring to avoid instance variables in tests using let or shared contexts instead of disabling rubocop.

2. Test Comment Accuracy

# NOTE: With async architecture, chunks are consumed in background immediately,
expect(chunks_read.count).to eq(3)

This comment explains the behavior change but could be clearer about why this is expected/acceptable.


Pre-Merge Checklist

Before merging, ensure:

  • Run full integration test suite locally (especially streaming tests)
  • Test with multiple concurrent components
  • Verify bundle exec rubocop passes
  • Run rake run_rspec:dummy for Pro specs
  • Update CHANGELOG_PRO.md (this is a Pro feature)
  • Consider adding inline documentation for async primitives usage

🎉 Conclusion

This is a well-designed refactoring that achieves its goals. The main concerns are around thread-safety of instance variables and memory management for the on_complete callback. These should be addressed or documented before merge.

Recommendation: Approve with minor changes (address thread-safety documentation and add test cases for edge cases).

The architectural direction is sound and this unblocks important future work. Nice job! 🚀

@claude claude bot commented Nov 23, 2025

Code Review: Async Gem Migration for Component Streaming

I've reviewed this PR and have several observations about code quality, architecture, and potential issues.


Strengths

1. Clear Architectural Improvement

  • The migration from manual Fiber management to async gem primitives is well-motivated and addresses a real limitation (inability to run async code inside each_chunk)
  • The use of Async::Barrier and Async::Variable provides a cleaner concurrency model
  • Code reduction (60 deletions, 20 additions) while maintaining functionality is excellent

2. Good Use of Async Primitives

  • Async::Barrier correctly tracks all streaming tasks
  • Async::Variable elegantly handles first-chunk synchronization
  • Async::LimitedQueue maintains backpressure control

3. Test Coverage

The test updates properly reflect the architectural changes, switching from Fiber-based assertions to barrier/queue-based ones.


⚠️ Critical Issues

1. Missing Error Handling in Cache Callback

Location: react_on_rails_pro_helper.rb:310-316

@async_barrier.async do
  stream.each_chunk do |chunk|
    all_chunks << chunk if on_complete
    # ...
  end
  on_complete&.call(all_chunks)  # ⚠️ No error handling
end

Problem: If the on_complete callback (cache write) fails, it could crash the async task silently or leave the barrier in a bad state.

Recommendation:

begin
  on_complete&.call(all_chunks)
rescue => e
  Rails.logger.error("Stream on_complete callback failed: #{e.message}")
  # Consider re-raising or handling based on requirements
end

2. Memory Concern: Unbounded Array Growth

Location: react_on_rails_pro_helper.rb:309

all_chunks = [] if on_complete # Only collect if callback provided

Problem: For large component streams, collecting ALL chunks in memory could be problematic. While the main output queue is limited (backpressure), this array is unbounded.

Questions:

  • What's the expected max size of a component stream?
  • Should there be a size limit or warning for cache writes?

Recommendation: Consider adding a configuration limit or logging when chunk arrays exceed a threshold.
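
One possible shape for that, sketched with an assumed threshold constant and a running byte counter (neither exists in the PR):

# Hypothetical guard inside the streaming task; all names are illustrative.
CACHED_STREAM_WARN_BYTES = 10 * 1024 * 1024 # assumed 10 MiB threshold

bytes_buffered = 0
stream.each_chunk do |chunk|
  if on_complete
    all_chunks << chunk
    bytes_buffered += chunk.bytesize
    if bytes_buffered > CACHED_STREAM_WARN_BYTES
      Rails.logger.warn("react_on_rails: buffered #{bytes_buffered} bytes of stream chunks for caching")
    end
  end
  # ... existing first-chunk / queue logic ...
end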

3. Race Condition in Empty Stream Handling

Location: react_on_rails_pro_helper.rb:327

# Handle case where stream has no chunks
first_chunk_var.value = nil if is_first

Problem: This check happens AFTER the each_chunk loop. If each_chunk executes asynchronously and completes before we check is_first, there could be a race where:

  1. each_chunk yields no chunks
  2. Task completes
  3. Main thread waits on first_chunk_var.wait forever

Recommendation: Set a timeout on the variable wait, or ensure the variable is initialized before starting the async task:

# Before the async task
first_chunk_var.value = nil  # Default value

@async_barrier.async do
  stream = yield
  is_first = true
  stream.each_chunk do |chunk|
    if is_first
      first_chunk_var.value = chunk
      is_first = false
    else
      # ...
    end
  end
end

🔍 Code Quality Issues

4. Inconsistent Variable Naming

  • @async_barrier vs @rorp_rendering_fibers
  • Consider using @streaming_barrier or @component_streaming_barrier for better semantic clarity

5. Missing Documentation

The new async architecture should be documented:

  • What thread safety guarantees exist?
  • What happens if stream_view_containing_react_components is called multiple times?
  • Are @async_barrier and @main_output_queue safe to reuse across requests?

Recommendation: Add YARD/RDoc comments explaining:

# Initializes async streaming primitives for concurrent component rendering.
# Must be called within a Sync block. Creates:
# - @async_barrier: Tracks all component streaming tasks
# - @main_output_queue: Buffers chunks with backpressure control
#
# @param template [String] The view template to render
# @param close_stream_at_end [Boolean] Whether to close the response stream
# @param render_options [Hash] Additional rendering options

6. Test Assertion Confusion

Location: react_on_rails_pro_helper_spec.rb:391-392

expect(initial_result).not_to include("More content", "Final content")
# NOTE: With async architecture, chunks are consumed in background immediately,
expect(chunks_read.count).to eq(3)

Problem: The comment suggests chunks are consumed immediately, but the test is checking that initial_result doesn't include future chunks. This seems inconsistent - are chunks consumed or not?

Recommendation: Clarify what this test is actually verifying. If the async task runs in the background, you may need explicit synchronization before asserting on chunks_read.count.


🏗️ Architecture Questions

7. Sync Block Scope

The entire stream_view_containing_react_components is now wrapped in Sync do.

Questions:

  • What's the performance impact of the Sync overhead?
  • Does this create a new event loop per request?
  • How does this interact with Rails threading model (Puma threads, etc.)?

Recommendation: Document the threading model implications, especially for high-concurrency scenarios.

8. Barrier Cleanup

@async_barrier.wait  # Wait for all tasks
@main_output_queue.close

Question: What happens if a task in the barrier fails? Does barrier.wait propagate exceptions? If so, does the queue get closed properly?

Recommendation: Add explicit error handling:

begin
  @async_barrier.wait
ensure
  @main_output_queue.close
  writing_task.wait
end

📊 Testing Gaps

9. Missing Test Cases

The PR should add tests for:

  1. Error handling: What happens when a component stream raises an exception?
  2. Empty streams: Verify behavior when stream.each_chunk yields nothing
  3. Cache callback failures: Test that on_complete errors are handled gracefully
  4. Concurrent performance: Verify multiple components stream concurrently as expected
  5. Memory bounds: Test behavior with very large streams (if applicable)

10. Cache Miss Test Update Needed

The handle_stream_cache_miss was significantly refactored to use on_complete callback, but I don't see explicit tests for this new code path.

Recommendation: Add a test verifying:

  • Cache write happens after all chunks are collected
  • Cache write failures don't break the stream
  • The callback receives all expected chunks

🎯 Performance Considerations

11. Double Collection of Chunks

When on_complete is provided:

  • Chunks are collected in all_chunks array
  • Chunks are also enqueued to @main_output_queue

This means chunks are stored in memory twice temporarily. Consider if this is necessary or if you can avoid double-buffering.


🔐 Security

No obvious security issues, but:

Recommendation: Ensure cache keys are properly namespaced/scoped to prevent cross-user data leakage if caching user-specific components.


📝 Documentation/Changelog

12. Missing Upgrade Guide

This is a significant internal architecture change. While backward compatible, developers debugging streaming issues need to know:

  • The new async-based architecture
  • How to debug barrier/queue issues
  • Performance characteristics

Recommendation: Add a section to CHANGELOG_PRO.md explaining the architectural shift and any debugging implications.


Pre-Merge Checklist

Before merging, please verify:

  • Run bundle exec rubocop and fix all violations (per CLAUDE.md)
  • Run existing streaming integration tests (rake run_rspec:dummy)
  • Test with multiple concurrent components
  • Test error scenarios (component raises exception)
  • Verify memory usage with large streams
  • Add documentation for new async architecture
  • Update CHANGELOG_PRO.md with user-facing changes (if any)

🎖️ Overall Assessment

This is a well-motivated refactor that simplifies the codebase and enables future async functionality. The core architecture is sound, but there are some critical issues around error handling, race conditions, and testing that should be addressed before merging.

Recommendation: Request Changes - Address the critical issues (error handling, race condition, test coverage) before merging.

Great work on the simplification! The async primitives are definitely the right direction. 🚀

@claude claude bot commented Nov 23, 2025

PR Review: Replace manual Fibers with Async gem primitives for component streaming

Summary

This PR refactors the component streaming implementation from manual Fiber management to async gem primitives (Async::Barrier, Async::Variable, Async::LimitedQueue). The primary motivation is to enable async gem code to run inside each_chunk blocks without triggering "Running scheduler on non-blocking fiber" errors.

Code Quality & Architecture

✅ Strengths

  1. Clear architectural improvement: The migration from manual Fibers to async primitives is well-motivated and makes the concurrency model more consistent.

  2. Significant code simplification: The PR reduces complexity by ~40 lines, with drain_streams_concurrently shrinking from 45 to 11 lines.

  3. Better separation of concerns: The on_complete callback pattern is cleaner than the previous wrapper-fiber approach for cache handling.

  4. Comprehensive test updates: Tests have been properly updated to work with the new async architecture.

Potential Issues & Concerns

🔴 CRITICAL: Missing Error Handling

Location: react_on_rails_pro/app/helpers/react_on_rails_pro_helper.rb:314-327

@async_barrier.async do
  stream.each_chunk do |chunk|
    all_chunks << chunk if on_complete

    if is_first
      first_chunk_var.value = chunk
      is_first = false
    else
      @main_output_queue.enqueue(chunk)
    end
  end

  first_chunk_var.value = nil if is_first
  on_complete&.call(all_chunks)
end

Problem: There's no error handling in the async task. If stream.each_chunk or on_complete raises an exception:

  • The exception may be swallowed until @async_barrier.wait is called
  • The first_chunk_var.wait could hang indefinitely if the exception occurs before the first chunk is set
  • Users would see a timeout rather than a meaningful error message

Recommendation:

@async_barrier.async do
  begin
    stream.each_chunk do |chunk|
      # ... existing logic ...
    end

    first_chunk_var.value = nil if is_first
    on_complete&.call(all_chunks)
  rescue => e
    # Ensure first_chunk_var is set even on error to unblock waiting code
    first_chunk_var.value = nil unless first_chunk_var.resolved?
    raise # Re-raise to propagate to barrier
  end
end

🟡 MEDIUM: Race Condition with Empty Streams

Location: Same location as above

first_chunk_var.value = nil if is_first

Problem: If stream.each_chunk completes immediately with no chunks (empty stream), there's a potential race. While Async::Variable should handle this correctly, the comment says "Handle case where stream has no chunks" but doesn't explain the expected behavior.

Recommendation: Add a test case for empty streams and document expected behavior.

🟡 MEDIUM: Sync Block Placement Change

Location: react_on_rails_pro/lib/react_on_rails_pro/concerns/stream.rb:33-56

The Sync block now wraps render_to_string, which could execute arbitrary Rails code (partials, helpers, etc.). This means:

  1. Any blocking I/O in the template will run on an async fiber
  2. The entire template rendering is now part of the async context

Question: Has this been tested with complex templates that might do database queries, external API calls, or other blocking operations? The PR description doesn't mention testing this scenario.

🟡 MEDIUM: Behavioral Change (Not Just Refactoring)

Location: react_on_rails_pro/spec/dummy/spec/helpers/react_on_rails_pro_helper_spec.rb

The test comment says "NOTE: With async architecture, chunks are consumed in background immediately" - this is a behavioral change, not just a refactoring. The old implementation evaluated lazily (chunks were consumed on demand); the new one is eager (all chunks are consumed immediately).

Question: What are the implications for:

  • Memory usage with large streams?
  • Backpressure handling?
  • Error handling mid-stream?

The PR claims "Same Functionality" but this appears to be a semantic difference.

Test Coverage

✅ Good Coverage

The PR updates tests comprehensively for the new async architecture.

❌ Missing Test Cases

  1. Error handling: No tests for exceptions during streaming
  2. Empty streams: No explicit test for zero-chunk streams
  3. Large streams: No test for memory behavior with many chunks
  4. Client disconnect: No test for connection drops mid-stream

Recommendation: Add tests for error scenarios:

it "handles errors during streaming gracefully" do
  allow(stream).to receive(:each_chunk).and_raise(StandardError, "Stream error")
  expect { stream_react_component(...) }.to raise_error(StandardError, "Stream error")
end

it "handles empty streams without hanging" do
  allow(stream).to receive(:each_chunk) # Never yields
  result = stream_react_component(...)
  expect(result).to be_nil
end

Compliance with CLAUDE.md

❌ Potential Issues

  1. Missing changelog entry: This is a significant architectural change that should be documented in CHANGELOG_PRO.md.

  2. Testing requirement: Per CLAUDE.md "CRITICAL - LOCAL TESTING REQUIREMENTS":

    • PR should clearly state whether tests were run locally
    • Should note: "Cannot test locally in isolated workspace" if applicable

Recommendations

Must Fix Before Merge

  1. Add error handling in run_stream_inside_fiber async block to prevent hangs on errors
  2. Add test for error scenarios during streaming
  3. Update CHANGELOG_PRO.md with this architectural change
  4. Document behavioral change from lazy to eager chunk consumption

Should Consider

  1. Add test for empty streams
  2. Document memory implications of all_chunks array
  3. Verify Sync block wrapping render_to_string doesn't break existing templates
  4. Add integration test with multiple concurrent components

Overall Assessment

Quality: Good - The refactoring simplifies the codebase and achieves the stated goal.

Completeness: Medium - Core functionality is there, but error handling and edge cases need work.

Risk Level: Medium - The behavioral change from lazy to eager evaluation could have unintended consequences.

Recommendation: Request changes - primarily adding error handling and tests for failure scenarios before merge.

@claude claude bot commented Nov 23, 2025

Code Review: Replace manual Fibers with Async gem primitives

I've reviewed this PR and overall it's a well-architected refactoring that successfully modernizes the concurrency model. Here's my detailed feedback:


✅ Strengths

1. Excellent Architectural Improvement

  • Replaces error-prone manual Fiber management with async gem primitives
  • Unified concurrency model makes the code more maintainable
  • The use of Async::Barrier, Async::Variable, and Async::LimitedQueue is appropriate

2. Significant Code Simplification

  • drain_streams_concurrently reduced from 45 lines to 11 lines (-75%)
  • Removed complex build_producer_tasks and build_writer_task methods
  • Clearer intent and flow

3. Proper Test Updates

  • Tests correctly updated to use async primitives
  • Good use of around blocks to provide async context
  • Tests verify both synchronous first chunk and async remaining chunks

4. Smart Caching Refactor

  • The on_complete callback approach is elegant
  • Cache writing moved from wrapper fiber to callback (cleaner separation)

⚠️ Issues & Recommendations

CRITICAL: Potential Resource Leak

Location: react_on_rails_pro_helper.rb:254-262

def handle_stream_cache_hit(component_name, raw_options, auto_load_bundle, cache_key, &_block)
  # ...
  initial_result, *rest_chunks = cached_chunks
  
  # Enqueue remaining chunks asynchronously
  @async_barrier.async do
    rest_chunks.each { |chunk| @main_output_queue.enqueue(chunk) }
  end
  
  # Return first chunk directly
  initial_result
end

Problem: If an exception occurs AFTER the first chunk is returned but BEFORE the async task completes enqueuing chunks, those chunks may be lost. The barrier might not properly wait for this task.

Recommendation:

# Ensure task is tracked before returning
task = @async_barrier.async do
  rest_chunks.each { |chunk| @main_output_queue.enqueue(chunk) }
end
# Consider: should we wait for first enqueue to complete before returning?
initial_result

MEDIUM: Error Handling Not Explicit

Location: stream.rb:33-46 and react_on_rails_pro_helper.rb:301-341

Issue: The new code doesn't make error propagation explicit:

  • What happens if @async_barrier.async block raises an error?
  • Will @async_barrier.wait propagate exceptions from child tasks?
  • What happens if @main_output_queue.enqueue blocks indefinitely?

Questions to verify:

  1. Does Async::Barrier#wait re-raise exceptions from child tasks?
  2. Is there a timeout for queue operations?
  3. What happens if one component fails while others are streaming?

Recommendation:

# Add explicit error handling example in comments or code
@async_barrier.async do
  begin
    stream.each_chunk { |chunk| @main_output_queue.enqueue(chunk) }
  rescue => e
    Rails.logger.error "Component streaming failed: #{e.message}"
    raise # Re-raise to propagate to barrier
  end
end

MEDIUM: Race Condition in First Chunk Delivery

Location: react_on_rails_pro_helper.rb:314-341

first_chunk_var = Async::Variable.new
all_chunks = [] if on_complete

@async_barrier.async do
  stream = yield
  is_first = true
  
  stream.each_chunk do |chunk|
    all_chunks << chunk if on_complete
    
    if is_first
      first_chunk_var.value = chunk  # Set variable
      is_first = false
    else
      @main_output_queue.enqueue(chunk)
    end
  end
  
  first_chunk_var.value = nil if is_first  # Handle empty stream
  on_complete&.call(all_chunks)
end

first_chunk_var.wait  # Wait for value
first_chunk_var.value

Concern: There's a subtle race between setting first_chunk_var.value and the main thread calling first_chunk_var.wait. While Async::Variable should handle this correctly, it's worth confirming:

  1. Is Async::Variable#wait guaranteed to see the value even if set before wait is called?
  2. What happens if the async block completes before wait is called?

Recommendation: Add a comment explaining the synchronization guarantee:

# Async::Variable.wait blocks until value is set, even if set before wait is called
first_chunk_var.wait

LOW: Empty Stream Handling Could Be Clearer

Location: react_on_rails_pro_helper.rb:332

# Handle case where stream has no chunks
first_chunk_var.value = nil if is_first

Issue: Setting to nil for empty streams is correct but might be surprising.

Recommendation:

# Return nil for empty streams (callers should handle this case)
first_chunk_var.value = nil if is_first

LOW: Test Comments Could Be More Precise

Location: react_on_rails_pro_helper_spec.rb:391-392

# NOTE: With async architecture, chunks are consumed in background immediately,
expect(chunks_read.count).to eq(3)

Issue: Comment is incomplete (ends with comma) and could better explain WHY all chunks are consumed immediately.

Better comment:

# NOTE: With async architecture, all chunks are consumed in the background immediately
# during the async task execution, unlike the old fiber approach where chunks were
# consumed lazily via fiber.resume calls
expect(chunks_read.count).to eq(3)

🔍 Performance Considerations

Async Overhead

The new approach creates an async task for EVERY component, even if it only has one chunk. This might have slight overhead vs. lazy fiber resumption.

Question: Have you benchmarked single-chunk components? The first chunk is still synchronous which is good, but task creation overhead might matter at scale.

Queue Memory Usage

Async::LimitedQueue with configurable buffer size is excellent for backpressure. Good design! ✅


🔒 Security Considerations

No New Security Issues

The refactoring doesn't introduce security vulnerabilities. Good! ✅

Resource Exhaustion

The buffer size limit prevents unbounded memory growth. ✅


🧪 Test Coverage

Good Coverage Overall

  • ✅ Tests verify async context setup
  • ✅ Tests verify synchronous first chunk
  • ✅ Tests verify async remaining chunks
  • ✅ Tests verify whitespace preservation
  • ✅ Cache tests updated appropriately

Missing Test Cases

Consider adding tests for:

  1. Error handling: What happens when each_chunk raises an error?
  2. Empty streams: Verify nil return for empty streams
  3. Concurrent component streaming: Test multiple components streaming simultaneously
  4. Queue full scenario: Test backpressure when queue fills up

📝 Code Quality & Best Practices

Follows Project Conventions

  • Proper use of frozen_string_literal: true
  • RuboCop violations addressed (disabled only where needed)
  • Code formatting consistent

Documentation Quality

  • PR description is excellent - very clear before/after comparison
  • Inline comments could be improved (see specific notes above)

🚀 Recommendations Before Merge

High Priority:

  1. ✅ Verify error propagation behavior with async gem documentation
  2. ✅ Add test for error handling in each_chunk
  3. ✅ Clarify/document the synchronization guarantees of Async::Variable

Medium Priority:

  1. Consider adding explicit error handling in async blocks
  2. Add test for empty stream case
  3. Improve incomplete comment in test file

Low Priority:

  1. Benchmark single-chunk component performance
  2. Add integration test with multiple concurrent components

✨ Overall Assessment

LGTM with minor concerns addressed. This is a solid refactoring that:

  • ✅ Solves the core problem (enables async code in each_chunk)
  • ✅ Simplifies the codebase significantly
  • ✅ Maintains backward compatibility
  • ✅ Has good test coverage

The main concerns are around error handling clarity and resource cleanup guarantees. I recommend addressing the critical resource leak concern and adding error handling tests before merging.

Confidence Level: High - the architecture is sound, just needs minor refinement in error handling.

Great work on this refactoring! 🎉

@claude claude bot commented Nov 23, 2025

Code Review: Replace manual Fibers with Async gem primitives

All CI checks are passing. Here is my detailed review of PR #2111.

Architecture and Design - Excellent Improvements

1. Unified Concurrency Model

  • Replacing manual Fiber.new with Async::Barrier is the right approach for running async code inside streaming contexts
  • The architecture shift from manual fiber management to async primitives is clean and well-motivated
  • Using Async::Variable for first-chunk synchronization is elegant and idiomatic

2. Simplified Code Structure

  • 40 lines removed while maintaining functionality is impressive
  • drain_streams_concurrently went from 45 to 11 lines with clearer intent
  • Removed complexity: build_producer_tasks and build_writer_task no longer needed

3. Callback-Based Cache Writes

  • Moving from wrapper fibers to on_complete callback is much cleaner
  • handle_stream_cache_miss is now declarative rather than imperative

Potential Issues

CRITICAL: Empty Stream Handling

In run_stream_inside_fiber at react_on_rails_pro_helper.rb:313-337, if stream.each_chunk yields zero chunks, first_chunk_var.wait will block forever because the variable is never set.

Root cause: The line first_chunk_var.value = nil if is_first only runs AFTER each_chunk completes. If each_chunk blocks or returns without yielding, we deadlock.

Fix: Track whether any chunks were yielded with has_chunks variable, set to false initially, true on first chunk, then check after loop.
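
A minimal sketch of the described fix (the has_chunks name comes from the comment above; the surrounding structure is assumed from snippets earlier in this thread):

@async_barrier.async do
  stream = yield
  has_chunks = false

  stream.each_chunk do |chunk|
    if has_chunks
      @main_output_queue.enqueue(chunk)
    else
      first_chunk_var.value = chunk # unblocks the waiter on the first chunk
      has_chunks = true
    end
  end

  # Unblock the waiter even when each_chunk yielded nothing at all.
  first_chunk_var.value = nil unless has_chunks
end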

MODERATE: Initialization Order

Instance variables async_barrier and main_output_queue are set before render_to_string, which is correct. Worth documenting this ordering requirement prominently since helpers depend on these being initialized first.

Suggestion: Add comment explaining that async primitives must be initialized BEFORE render_to_string.

MINOR: Error Propagation

Verify: Does the writing task handle exceptions from response.stream.write? What if the client disconnects mid-stream?

Testing - Good Coverage

Tests were properly updated:

  • around blocks wrap tests in Sync do
  • Tests properly wait for async_barrier.wait
  • Tests close queue and drain remaining chunks

Note behavior change: Before tests explicitly controlled chunk consumption via fiber.resume. Now all chunks consumed immediately in background. This is fine but consider if you have integration tests verifying actual streaming behavior over time.

Performance

Positive:

  • Async::LimitedQueue provides backpressure via buffer_size limit
  • First chunk still returned synchronously for fast perceived load

Consideration:

  • Async::Variable.wait is blocking. Should be fine in controller action but verify no thread starvation under high concurrency.

Suggestions

  1. Add detailed comments to run_stream_inside_fiber explaining the flow
  2. Consider timeout protection for first_chunk_var.wait (see the sketch after this list)
  3. Add explicit test for empty stream case
  4. Add error handling for on_complete callback
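
For suggestion 2, a hedged sketch using the async gem's task-level timeout (the 10-second duration and the error message are assumptions):

# Inside run_stream_inside_fiber, replacing the bare wait:
begin
  Async::Task.current.with_timeout(10) do
    first_chunk_var.wait
  end
rescue Async::TimeoutError
  raise ReactOnRails::Error, "Timed out waiting for the first streamed chunk"
end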

Summary

This is a well-architected refactoring achieving its primary goal of enabling async gem code inside component streaming.

Strengths:

  • Clear motivation
  • Cleaner code (-40 lines)
  • Tests updated appropriately
  • CI passing

Action Items (priority order):

  1. CRITICAL: Fix empty stream deadlock scenario
  2. IMPORTANT: Add timeout protection for first_chunk_var.wait
  3. RECOMMENDED: Add explicit empty stream test case
  4. Nice-to-have: Documentation comments and on_complete error handling

Overall: Strong PR - just needs empty stream edge case addressed before merging.

@claude claude bot commented Nov 23, 2025

PR #2111 Review: Replace manual Fibers with Async gem primitives

Overall Assessment: ✅ Strong Refactoring

This PR successfully modernizes the concurrent streaming implementation by replacing manual Fiber management with async gem primitives. The changes are well-motivated, properly tested, and significantly simplify the codebase.

✅ Strengths

1. Clear Problem & Solution

  • Well-documented motivation: enabling async gem code within each_chunk
  • The "Running scheduler on non-blocking fiber" error is a real constraint
  • Solution correctly addresses the root cause

2. Code Quality Improvements

  • -40 net lines of code (60 deleted, 20 added)
  • Eliminated complex build_producer_tasks and build_writer_task methods
  • Simplified drain_streams_concurrently from 45 lines to 11 lines
  • Better separation of concerns with on_complete callback

3. Excellent Test Coverage

  • Updated all affected tests to use new async primitives
  • Tests now properly simulate the async context with Sync do blocks
  • Maintained test coverage for edge cases (whitespace preservation, empty streams)
  • Added proper cleanup (@async_barrier.wait, @main_output_queue.close)

4. Backward Compatibility

  • Same public API (no breaking changes for users)
  • Preserved all existing functionality:
    • Concurrent component streaming ✅
    • Backpressure via LimitedQueue
    • First chunk synchronous delivery ✅
    • Cache hit/miss handling ✅

🔍 Code Quality Observations

1. Error Handling - Good

# Helper initialization check
if @async_barrier.nil?
  raise ReactOnRails::Error,
        "You must call stream_view_containing_react_components..."
end

✅ Proper guard clause to catch misuse early

2. Resource Management - Good

begin
  drain_streams_concurrently(parent_task)
ensure
  response.stream.close if close_stream_at_end
end

✅ Ensures streams are properly closed even on errors

3. Cache Integration - Excellent

The new on_complete callback design is much cleaner than the wrapper fiber approach:

Before (complex):

wrapper_fiber = Fiber.new do
  while (chunk = original_fiber.resume)
    buffered_chunks << chunk
    Fiber.yield(chunk)
  end
  Rails.cache.write(view_cache_key, buffered_chunks, ...)
end

After (clean):

on_complete: lambda { |chunks|
  Rails.cache.write(view_cache_key, chunks, ...)
}

⚠️ Potential Concerns & Questions

1. Performance: Async::Variable Blocking (Minor)

# Wait for and return the first chunk (blocking)
first_chunk_var.wait
first_chunk_var.value

Question: Does Async::Variable#wait properly integrate with the async scheduler, or does it block the entire thread? If it blocks the thread, this could impact concurrency.

Recommendation: Verify that this wait is non-blocking at the fiber level. The async gem documentation should confirm this.

2. Memory Usage: Chunk Collection (Minor)

all_chunks = [] if on_complete  # Only collect if callback provided
# ...
stream.each_chunk do |chunk|
  all_chunks << chunk if on_complete

Observation: When caching is enabled, all chunks are now buffered in memory in addition to being sent to the queue. This is acceptable for typical use cases but could be problematic for extremely large streams.

Recommendation: Document this behavior or consider if there's a way to avoid double-buffering.

3. Empty Stream Handling (Good Catch)

# Handle case where stream has no chunks
first_chunk_var.value = nil if is_first

✅ Good defensive programming! Empty streams are handled correctly.

4. Instance Variable Lifecycle

@async_barrier = Async::Barrier.new
@main_output_queue = Async::LimitedQueue.new

Question: Are these instance variables properly scoped to the request lifecycle? In a threaded environment, could there be any risk of these leaking across requests?

Observation: Since they're initialized in Sync do block during stream_view_containing_react_components, they should be request-scoped. But worth confirming that the controller instance is not reused.

5. Error Propagation

@async_barrier.wait  # Wait for all tasks to complete

Question: If a component streaming task raises an exception inside @async_barrier.async, will it be properly propagated when calling @async_barrier.wait?

Recommendation: Add a test case that verifies error handling:

it "propagates errors from streaming tasks" do
  allow(stream).to receive(:each_chunk).and_raise("Streaming error")
  expect {
    stream_react_component(component_name, props: props)
    @async_barrier.wait
  }.to raise_error(/Streaming error/)
end

🔐 Security Considerations

✅ No security concerns identified. The changes are internal refactoring without new user input handling or external data processing.

🧪 Test Coverage Assessment

Test Updates: Excellent

  1. Proper async context setup with around blocks
  2. Updated expectations to match new async behavior
  3. Comprehensive scenarios:
    • Basic streaming ✅
    • Whitespace preservation ✅
    • Cache hits ✅
    • Cache misses ✅
    • Empty streams ✅ (implicit)

Suggested Additional Tests

  1. Error propagation test (see #5 above)
  2. Race condition test: Multiple components streaming simultaneously
  3. Backpressure test: Verify LimitedQueue capacity is respected

📝 Documentation & Changelog

PR Description: ✅ Excellent

  • Clear problem statement
  • Architecture comparison (before/after)
  • Benefits enumerated
  • Testing checklist provided

Missing: Changelog Entry

⚠️ According to CLAUDE.md, this should have an entry in /CHANGELOG_PRO.md since it's a Pro feature change.

Recommendation: Add changelog entry:

#### Improvements

[PR 2111](https://github.com/shakacode/react_on_rails/pull/2111) by [AbanoubGhadban](https://github.com/AbanoubGhadban)

Refactored component streaming to use Async gem primitives instead of manual Fiber management. This enables running async code within streaming components (prerequisite for async props support) and simplifies the codebase by 40 lines.

🎯 Recommendations

Must Address Before Merge:

  1. ✅ Add /CHANGELOG_PRO.md entry (user-facing improvement)

Should Investigate:

  1. Verify Async::Variable#wait is non-blocking at fiber level
  2. Confirm instance variable scoping in threaded environments
  3. Add error propagation test case

Nice to Have:

  1. Document the chunk buffering behavior when on_complete is used
  2. Consider adding integration test for the "async code in each_chunk" use case

✅ Final Verdict

Approve with minor recommendations.

This is a well-executed refactoring that:

  • Solves a real technical constraint
  • Improves code quality significantly
  • Maintains backward compatibility
  • Has comprehensive test coverage
  • Passes all CI checks

The concerns raised are minor and mostly investigative. The only required action is adding a changelog entry per project conventions.

Great work! 🎉

@claude claude bot commented Nov 23, 2025

Code Review: Replace Manual Fibers with Async Gem Primitives

Great work on this refactoring! The shift from manual Fiber management to async gem primitives is a solid architectural improvement.


✅ Strengths

1. Clear Architecture

  • The barrier-based async flow is much cleaner than manual Fiber juggling
  • Excellent code reduction: -40 lines total, simplified drain_streams_concurrently (45→11 lines)
  • Outstanding PR description providing context

2. Good Error Handling

  • ensure blocks properly maintained for cleanup
  • Error propagation through async barriers should work correctly
  • Proper sequencing: barrier.wait → queue.close → drain loop

3. Test Coverage

  • Tests updated to reflect new async architecture
  • Good use of around blocks to provide async context
  • Tests verify both synchronous first chunk delivery and async remainder

🔍 Potential Issues

1. CRITICAL: Race Condition Risk in First Chunk Delivery

In run_stream_inside_fiber, the code calls first_chunk_var.wait BEFORE the async task has started executing. This creates a potential race where if stream.each_chunk hasn't yielded the first chunk yet, wait might block indefinitely.

Why this might work in tests but fail in production: Tests may have very fast synchronous streams. Production streams might have network delays or complex rendering.

Recommendation: Verify that Async::Variable#wait blocks correctly until a value is set (according to async gem docs it should). Consider adding a timeout as a safeguard.

2. Error Handling: What if stream.each_chunk Raises?

If each_chunk raises an exception:

  • first_chunk_var may never be set → wait hangs forever
  • on_complete callback won't be called → cache writes fail silently
  • all_chunks cleanup doesn't happen

Recommended fix: Add rescue block to ensure first_chunk_var is set even on error, and re-raise to propagate to barrier.

3. Memory Management: all_chunks Array

For large streams with many chunks, the all_chunks array accumulates all chunks in memory simultaneously when caching is enabled.

Recommendation: Add documentation about memory implications of caching large components. Consider if there should be a configurable chunk limit.

4. Missing on_complete Documentation

The on_complete callback is a new feature but lacks:

  • Public API documentation
  • Example usage outside of caching
  • Callback signature documentation
  • Thread-safety guarantees

Recommendation: Add YARD docs with examples.

5. Testing Gap: Edge Cases

Great coverage overall, but missing:

  • Empty stream test (what if stream.each_chunk yields nothing?)
  • Error during streaming test (what if a chunk raises?)
  • Large stream test (1000+ chunks)
  • on_complete error test (what if callback raises?)

📊 Performance Considerations

Memory Usage Increased:

  • Before: Manual fibers resume one chunk at a time (no accumulation)
  • After: all_chunks array holds all chunks in memory when caching enabled

Impact: For cached components, memory usage is now O(n) where n = total chunk size.

Verdict: Performance tradeoff is acceptable but should be documented.


✅ Final Verdict

Approve with minor fixes recommended

Must Fix Before Merge:

  1. Verify Async::Variable#wait race condition is safe (or add timeout/safeguard)
  2. Add error handling to prevent first_chunk_var.wait from hanging forever
  3. Add on_complete documentation (YARD comments minimum)

Should Fix Before Merge:

  1. Add error handling to on_complete callback in cache writer
  2. Add test for empty stream edge case
  3. Add test for error during streaming

Nice to Have:

  1. Memory usage documentation for caching
  2. Performance testing with multiple concurrent components

Overall: Excellent refactoring that achieves its goals. The architecture is cleaner and enables future async props work. Issues identified are mostly edge cases and documentation gaps, not fundamental flaws. With suggested fixes, this PR will be solid.

Great work! 🚀

@claude

claude bot commented Nov 23, 2025

PR Review: Replace Manual Fibers with Async Gem Primitives

Great work on this refactoring! This is a well-structured PR that successfully replaces manual Fiber management with async gem primitives. The code is cleaner, more maintainable, and enables the critical feature of running async code inside streaming components.


✅ Strengths

1. Clear Architecture & Motivation

  • Excellent PR description explaining the "why" (enabling async code in each_chunk)
  • Good before/after comparison showing the architectural shift
  • Well-documented commit progression showing incremental changes

2. Significant Code Simplification

  • -40 lines net reduction (60 deleted, 20 added)
  • drain_streams_concurrently simplified from 45 lines to 11 lines
  • Removed complex methods: build_producer_tasks and build_writer_task
  • Cleaner separation of concerns

3. Unified Concurrency Model

  • All concurrency now managed by async gem (no manual Fiber juggling)
  • Consistent patterns: Async::Barrier, Async::Variable, Async::LimitedQueue
  • Easier to reason about execution flow

4. Clean Callback Pattern

  • The on_complete callback is elegant and explicit
  • Much better than the previous @rorp_cache_buffering instance variable approach
  • Memory-efficient (only collects chunks when callback provided)

5. Comprehensive Test Updates

  • All tests updated to match new architecture
  • Tests properly wrapped in Sync blocks
  • Good use of around blocks for test setup

🔍 Code Quality Observations

Potential Issues & Questions

1. Race Condition in drain_streams_concurrently ⚠️

def drain_streams_concurrently(parent_task)
  writing_task = parent_task.async do
    while (chunk = @main_output_queue.dequeue)
      response.stream.write(chunk)
    end
  end

  @async_barrier.wait  # Wait for producers
  @main_output_queue.close  # Signal end
  writing_task.wait  # Wait for writer
end

Concern: There's a potential race between:

  • Writer task calling dequeue and getting nil (queue closed)
  • Barrier completing but queue not yet closed

Question: What happens if a producer enqueues a chunk after barrier.wait returns but before queue.close? Could this lead to lost chunks?

Recommendation: Consider:

# Option 1: Close queue inside barrier tasks
@async_barrier.async do
  stream.each_chunk { |c| @main_output_queue.enqueue(c) }
ensure
  @main_output_queue.close if last_task?  # If this is implementable
end

# Option 2: Use ensure block
begin
  @async_barrier.wait
ensure
  @main_output_queue.close
end

2. Error Propagation 🤔

@async_barrier.async do
  stream.each_chunk do |chunk|
    # What if this raises an error?
    @main_output_queue.enqueue(chunk)
  end
end

Question: How are errors in streaming tasks handled?

  • Does Async::Barrier#wait propagate exceptions from tasks?
  • Will the writer task continue draining if a producer fails?
  • Should there be explicit error handling?

Previous code had this comment:

# This structure ensures that even if a producer task fails, we always
# signal the writer to stop and then wait for it to finish draining

Recommendation: Verify that error propagation still works correctly. Consider adding:

begin
  drain_streams_concurrently(parent_task)
rescue => e
  Rails.logger.error("Streaming error: #{e.message}")
  raise
ensure
  response.stream.close if close_stream_at_end
end

3. Memory Safety with on_complete 📊

all_chunks = [] if on_complete  # Only collect if callback provided
stream.each_chunk do |chunk|
  all_chunks << chunk if on_complete
  # ...
end

Good: Memory-efficient, only collects when needed.

Question: For very large streams (many components, large chunks), could the all_chunks array grow too large before being written to cache?

Recommendation: Consider documenting the memory implications or adding a config option for max cache size.

4. Empty Stream Handling

# Handle case where stream has no chunks
first_chunk_var.value = nil if is_first

Good: Explicit handling of empty streams.

Question: What should the behavior be if a component produces no chunks? Should this be an error or valid?


🧪 Testing Concerns

1. Test Behavior Changes

-        expect(chunks_read.count).to eq(1)
+        # NOTE: With async architecture, chunks are consumed in background immediately,
+        expect(chunks_read.count).to eq(3)

Concern: This is a fundamental behavior change in the tests. The old architecture returned 1 chunk synchronously, the new one appears to consume all 3 immediately.

Questions:

  • Is this the intended behavior?
  • Does this affect production behavior (e.g., when components stream)?
  • Could this indicate a timing issue in the test setup?

Recommendation: Verify that in production:

  • First chunk is still returned synchronously (good for TTFB)
  • Remaining chunks are truly streamed asynchronously
  • Add integration tests to verify actual streaming behavior (not just unit tests)

2. Test Isolation ⚠️

# rubocop:disable RSpec/InstanceVariable

Concern: Using instance variables in tests can lead to state leakage between examples.

Recommendation: While the around block helps, consider using let or explicit setup methods to make dependencies clearer.
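For example, a sketch that keeps the instance variables the helper expects but declares the dependencies via let:

let(:async_barrier) { Async::Barrier.new }
let(:main_output_queue) { Async::Queue.new }

around do |example|
  Sync do
    # The helper reads these instance variables, so assign them from the lets.
    @async_barrier = async_barrier
    @main_output_queue = main_output_queue
    example.run
  end
end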


🔒 Security & Performance

Performance

Good:

  • Concurrent streaming maintained
  • Backpressure via LimitedQueue preserved
  • First chunk synchronous delivery preserved (good TTFB)

⚠️ Consider:

  • Benchmark the new implementation vs old to verify no regression
  • Profile memory usage with large component streams

Security

Good: No obvious security issues introduced.


📝 Documentation & Style

1. Comments

  • Good inline comments explaining async primitives
  • Clear lifecycle comments ("Wait for all tasks", "Close queue")

2. Code Style

  • Follows Ruby conventions
  • RuboCop passing
  • Consistent formatting

3. Missing Documentation

Recommendation: Add documentation for the on_complete callback:

# @param on_complete [Proc, nil] Optional callback called after streaming completes.
#   Receives an array of all chunks. Used for caching.
# @return [String] The first chunk (initial HTML)
def run_stream_inside_fiber(on_complete: nil)

🎯 Recommendations Summary

Critical

  1. Verify race condition handling in drain_streams_concurrently
  2. Test error propagation - ensure errors in streaming tasks are properly handled
  3. Verify production streaming behavior - ensure first chunk truly synchronous

Important

  1. 📚 Add YARD documentation for on_complete parameter
  2. 🧪 Add integration tests to verify actual streaming behavior (not just unit tests)
  3. 📊 Consider memory implications of all_chunks array for large streams

Nice to Have

  1. 🔍 Benchmark performance vs old implementation
  2. 🧹 Consider refactoring tests to avoid instance variable usage

✅ Final Verdict

LGTM with minor concerns to address

This is a solid refactoring that achieves its goals:

  • ✅ Enables async code in streaming components
  • ✅ Simplifies code significantly (-40 lines)
  • ✅ Maintains existing functionality
  • ✅ All tests passing

The concerns raised are mostly about ensuring robustness (error handling, race conditions) and verifying production behavior matches test expectations.

Suggested next steps:

  1. Address race condition concern in drain_streams_concurrently
  2. Verify error propagation works correctly
  3. Add integration test to verify first-chunk-synchronous behavior in production
  4. Consider adding YARD docs for the new on_complete parameter

Great work overall! The code is much cleaner and the architecture is easier to understand. 🎉


Review by Claude Code 🤖

@claude

claude bot commented Nov 23, 2025

Code Review: Async Gem Primitives for Component Streaming

Summary

This is a well-architected refactor that successfully replaces manual Fiber management with async gem primitives. The changes enable async code execution within streaming while maintaining all existing functionality. All CI checks are passing ✅


✅ Strengths

1. Clear Architecture Improvement

  • Unified concurrency model: All fibers now managed by async gem
  • Simpler code: -40 lines total, -34 lines in stream.rb alone
  • Better separation of concerns: Async::Barrier for task coordination, Async::Variable for synchronous first-chunk delivery, Async::LimitedQueue for backpressure

2. Proper Test Updates

  • Tests correctly updated to use Sync blocks and async primitives
  • Test setup properly mimics production environment with @async_barrier and @main_output_queue
  • Good use of around blocks to wrap tests in async context

3. Backwards Compatibility

  • Cache hit/miss logic preserved correctly
  • First chunk still delivered synchronously (critical for performance)
  • Error propagation maintained
  • Backpressure via LimitedQueue preserved

4. Smart Callback Design

The on_complete callback pattern in handle_stream_cache_miss is elegant:

cache_aware_options = raw_options.merge(
  on_complete: lambda { |chunks|
    Rails.cache.write(view_cache_key, chunks, raw_options[:cache_options] || {})
  }
)

This is much cleaner than the previous wrapper fiber approach.


🔍 Potential Issues & Questions

1. Require Statements Location ⚠️

Issue: require statements are inside methods rather than at the top of files.

Current:

def stream_view_containing_react_components(...)
  require "async"
  require "async/barrier"
  require "async/limited_queue"
  # ...
end

def run_stream_inside_fiber(on_complete:)
  require "async/variable"
  # ...
end

Recommendation: Move requires to the top of the file or use lazy loading if there's a specific reason (e.g., optional dependency). Inline requires:

  • Reduce code clarity
  • Can cause confusion about dependencies
  • Execute on every method call (though a repeated require is a fast no-op once the file is loaded)

Suggested fix:

# At top of react_on_rails_pro/lib/react_on_rails_pro/concerns/stream.rb
require "async"
require "async/barrier"
require "async/limited_queue"

# At top of react_on_rails_pro/app/helpers/react_on_rails_pro_helper.rb
require "async/variable"

2. Empty Stream Edge Case

Observation: The code handles empty streams:

# Handle case where stream has no chunks
first_chunk_var.value = nil if is_first

Question: What happens if a component yields zero chunks? Will this cause issues downstream? Consider adding a test case for this scenario.

3. Error Handling in Async Tasks

Current: Error propagation relies on async gem's built-in behavior.

Question: What happens if:

  • A component's each_chunk raises an exception?
  • The on_complete callback fails?
  • Multiple components fail concurrently?

Recommendation: Add explicit error handling tests or document expected behavior. Consider:

@async_barrier.async do
  begin
    stream.each_chunk { |chunk| ... }
    on_complete&.call(all_chunks)
  rescue => e
    Rails.logger.error("Component streaming failed: #{e.message}")
    raise
  end
end

4. Memory Considerations

Observation: all_chunks = [] if on_complete buffers chunks in memory.

Question: For large components with many chunks, this could be memory-intensive. Is there a size limit? Should there be?

Recommendation: Document expected memory usage or add configuration for max buffer size.


🎯 Best Practices Observations

1. Good: Minimal API Surface Changes

The public API (stream_react_component, cached_stream_react_component) remains unchanged except for the optional on_complete parameter. Excellent!

2. Good: Test Coverage

Tests updated comprehensively, including:

  • First chunk synchronous delivery
  • Subsequent chunk async delivery
  • Cache hit/miss scenarios
  • Whitespace preservation

3. Consider: Documentation

The PR description is excellent, but consider adding inline comments for complex async flows, especially:

  • Why Sync must wrap the entire stream_view_containing_react_components method
  • The contract for on_complete callbacks
  • Thread-safety guarantees

🔐 Security Considerations

No obvious security issues. The changes:

  • Don't introduce user input handling
  • Don't change authentication/authorization
  • Maintain same queue size limits (backpressure)
  • Use standard Ruby/async gem primitives

🚀 Performance Considerations

Positive:

  • ✅ Simpler code path should be slightly faster
  • ✅ Backpressure maintained via LimitedQueue
  • ✅ First chunk still synchronous - no TTFB regression

Questions:

  • Async::Variable overhead: Is first_chunk_var.wait more expensive than Fiber.resume? Likely negligible, but worth profiling.
  • Memory: Buffering all_chunks for cache callbacks uses more memory than the previous streaming approach.

📋 Recommendations Before Merge

High Priority:

  1. Move require statements to top of files (or document why they're inline)
  2. Add error handling test for component streaming failures

Medium Priority:

  1. Document on_complete callback contract in code comments
  2. Add test for empty stream (zero chunks) edge case
  3. Document memory implications of chunk buffering for caching

Low Priority:

  1. Consider adding performance benchmarks comparing old vs. new approach
  2. Add inline comments explaining async flow for future maintainers

✅ Final Verdict

LGTM with minor suggestions 🎉

This is a solid refactor that achieves its primary goal: enabling async gem code execution within component streaming. The code is cleaner, tests are comprehensive, and CI is green. Address the inline require statements and consider the error handling suggestions, and this is ready to merge.

Risk Level: Low - Changes are well-tested and backwards compatible.

Great work on the architecture redesign! 🚀


@claude

claude bot commented Nov 23, 2025

Code Review: Replace manual Fibers with Async gem primitives

This is a well-architected refactor that successfully transitions from manual Fiber management to async gem primitives. Here's my detailed feedback:


✅ Strengths

1. Clear Problem Statement & Architecture

The PR description excellently documents the problem ("Running scheduler on non-blocking fiber") and shows before/after architecture. This makes the change easy to review and understand.

2. Unified Concurrency Model

Replacing manual Fibers with Async::Barrier, Async::Variable, and Async::LimitedQueue creates a more maintainable and idiomatic async solution. This enables future async gem code in each_chunk without errors.

3. Code Simplification

  • -40 lines total (60 deleted, 20 added)
  • drain_streams_concurrently reduced from 45 lines to 11 lines
  • Removed build_producer_tasks and build_writer_task helper methods

4. Test Coverage Updates

Tests were properly updated to work with the new async primitives:

  • Added around blocks to wrap tests in Sync context
  • Updated assertions to account for async chunk consumption
  • Added proper barrier waiting and queue draining

5. Cache Implementation Improvement

The on_complete callback approach for cache miss is much cleaner than the old wrapper fiber pattern:

# Before: Complex wrapper fiber that buffers and yields chunks
wrapper_fiber = Fiber.new do
  while (chunk = original_fiber.resume)
    buffered_chunks << chunk
    Fiber.yield(chunk)
  end
  Rails.cache.write(view_cache_key, buffered_chunks, raw_options[:cache_options] || {})
end

# After: Clean callback-based approach
on_complete: lambda { |chunks|
  Rails.cache.write(view_cache_key, chunks, raw_options[:cache_options] || {})
}

🔍 Observations & Questions

1. Sync do Block Placement

The Sync do block now wraps the entire stream_view_containing_react_components method. This is correct for the architecture, but worth noting:

  • Benefit: All component async tasks start during view rendering (as described in PR)
  • Trade-off: The entire method execution is now synchronous at the outer level (blocking until all streaming completes)

Question: Is this the intended behavior? Should the method return immediately and stream in the background, or block until complete?

2. Empty Stream Handling

Good catch handling the edge case:

# Handle case where stream has no chunks
first_chunk_var.value = nil if is_first

3. Queue Type Inconsistency in Tests

In tests, you're using Async::Queue.new instead of Async::LimitedQueue.new:

# In tests
@main_output_queue = Async::Queue.new

# In production code
@main_output_queue = Async::LimitedQueue.new(buffer_size)

Suggestion: Consider using LimitedQueue in tests with a large buffer to match production behavior more closely. This could help catch backpressure-related bugs.
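For example (buffer size chosen arbitrarily large so tests rarely hit backpressure):

@main_output_queue = Async::LimitedQueue.new(1_000)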

4. Error Propagation

The old code had explicit error handling structure:

begin
  tasks.each(&:wait)
ensure
  queue.close
  writer.wait
end

The new code relies on async gem's error propagation:

@async_barrier.wait  # Will propagate any task errors
@main_output_queue.close
writing_task.wait

This is correct - async gem propagates exceptions from child tasks to parent. However, consider adding a comment or integration test demonstrating error handling behavior.


🐛 Potential Issues

1. RuboCop Disable Comment

The PR adds # rubocop:disable RSpec/InstanceVariable to the entire test file:

# rubocop:disable RSpec/InstanceVariable
# ... 800+ lines ...
# rubocop:enable RSpec/InstanceVariable

Issue: Disabling RuboCop for the entire file is too broad. Instance variables in tests (@async_barrier, @main_output_queue) are necessary to simulate the controller context.

Suggestion: Use more targeted disabling:

around do |example|
  Sync do
    # rubocop:disable RSpec/InstanceVariable
    @async_barrier = Async::Barrier.new
    @main_output_queue = Async::Queue.new
    # rubocop:enable RSpec/InstanceVariable
    
    example.run
  end
end

Or consider extracting a test helper method to avoid the violation entirely.


🚀 Performance Considerations

Positive:

  • Backpressure maintained: LimitedQueue prevents memory bloat
  • Concurrent streaming preserved: All components stream concurrently via barrier

Minor Concern:

  • Async::Variable adds a blocking wait for first chunk. This should be negligible, but worth profiling in production if first-paint performance is critical.

🔒 Security Considerations

No security concerns identified. The change is purely architectural and doesn't introduce:

  • New user inputs
  • External API calls
  • Authentication/authorization changes
  • Data exposure risks

📋 Test Coverage Assessment

✅ Well-Covered:

  • First chunk synchronous delivery
  • Subsequent chunks in queue
  • Whitespace preservation
  • Cache hit/miss scenarios
  • Multiple concurrent components

🟡 Could Add:

  1. Error propagation test: What happens when a component stream throws an error?
  2. Empty stream test: Explicit test for streams with zero chunks
  3. Backpressure test: Verify behavior when queue fills up (though this may be integration-level)

📝 Documentation & Comments

Good:

  • PR description is excellent and should be preserved in commit history
  • Code comments explain non-obvious behavior (e.g., first chunk variable)

Suggestion:

Add a brief comment in stream_view_containing_react_components explaining why Sync do wraps the entire method:

# Wrap in Sync block to establish async context before view rendering.
# Components will start streaming immediately during render_to_string.
Sync do |parent_task|

✅ Verdict

Approve with minor suggestions.

This is a solid architectural improvement that:

  • ✅ Solves the stated problem (enables async code in streaming)
  • ✅ Simplifies the codebase significantly
  • ✅ Maintains all existing functionality
  • ✅ Has appropriate test coverage
  • ✅ CI passing

The only actionable item is the overly-broad RuboCop disable. Everything else is minor polish.


🎯 Pre-Merge Checklist

Based on CLAUDE.md requirements:

  • ✅ CI passing (all checks green)
  • ✅ RuboCop compliance (though consider narrowing disable scope)
  • ✅ Test coverage updated
  • ⚠️ CHANGELOG.md: Should this be added? It's a significant internal refactor that enables the future async props feature
  • ✅ Pro package (changes are in react_on_rails_pro/)

Recommendation: Consider adding a CHANGELOG_PRO.md entry under "Internal" or "Refactor" category, mentioning this enables upcoming async props support.


Great work on this refactor! The async primitive approach is much cleaner and more maintainable. 🚀

@claude

claude bot commented Nov 23, 2025

Code Review: Async Gem Primitives Migration

I've reviewed this PR that replaces manual Fiber management with Async gem primitives. Overall, this is a well-architected refactoring that achieves its stated goals. Here's my detailed feedback:


Strengths

1. Clear Problem & Solution

The PR description excellently articulates:

  • Problem: Manual Fibers prevent running async gem code in each_chunk
  • Solution: Use Async::Barrier, Async::Variable, and Async::LimitedQueue
  • Architecture: Clear before/after diagrams showing the transition

2. Code Simplification

  • -40 lines net (60 deleted, 20 added)
  • stream.rb: Reduced from 45 lines to 11 lines in drain_streams_concurrently
  • Removed complex build_producer_tasks and build_writer_task methods
  • Much clearer intent and flow

3. Excellent Test Coverage

  • Tests properly updated to use barrier/queue primitives
  • around blocks correctly wrap tests in Sync context
  • Tests verify both synchronous first-chunk and asynchronous remaining chunks
  • Proper async lifecycle management with barrier.wait and queue.close

4. Backward Compatibility Preserved

All existing functionality maintained:

  • ✅ Concurrent component streaming
  • ✅ Backpressure via LimitedQueue
  • ✅ Error propagation
  • ✅ First chunk synchronous delivery

5. Smart Caching Integration

The on_complete callback pattern for cache writes is elegant:

on_complete: lambda { |chunks|
  Rails.cache.write(view_cache_key, chunks, raw_options[:cache_options] || {})
}

This is cleaner than the previous wrapper fiber approach.


🔍 Potential Issues & Suggestions

1. Error Handling & Resource Cleanup ⚠️ CRITICAL

Issue: If an async task raises an exception, what happens to:

  • The @main_output_queue?
  • The @async_barrier?
  • The HTTP response stream?

Current code in drain_streams_concurrently:

begin
  drain_streams_concurrently(parent_task)
ensure
  response.stream.close if close_stream_at_end
end

Questions:

  1. Does @async_barrier.wait propagate exceptions from child tasks?
  2. If a component streaming task fails, will @main_output_queue.close still be called?
  3. Should there be a rescue clause to handle streaming errors gracefully?

Suggestion: Add explicit error handling:

begin
  @async_barrier.wait
rescue => e
  Rails.logger.error "Component streaming failed: \#{e.message}"
  # Optionally write error indicator to stream
ensure
  @main_output_queue.close
  writing_task.wait
end

2. Empty Stream Edge Case

Observation: The code handles empty streams:

# Handle case where stream has no chunks
first_chunk_var.value = nil if is_first

Question: What happens if a component returns an empty stream?

  • Does first_chunk_var.wait return nil?
  • Does the view still render correctly with a nil chunk?
  • Should this log a warning or raise an error?

Suggestion: Add a test case for empty stream behavior.

3. Memory Considerations

Issue: When using on_complete callback, all chunks are buffered in memory:

all_chunks = [] if on_complete
# ...
all_chunks << chunk if on_complete

Questions:

  1. For large component outputs, could this cause memory issues?
  2. Should there be a size limit or warning?
  3. Is this only used for caching (which already buffers)?

Suggestion: Document that on_complete buffers all chunks in memory and should only be used for cacheable components.

4. Barrier Task Lifecycle

Question: In run_stream_inside_fiber, each component adds a task to @async_barrier:

@async_barrier.async do
  stream.each_chunk { |chunk| ... }
end

If a component's each_chunk hangs or takes a very long time, does this block @async_barrier.wait indefinitely?

Suggestion: Consider adding timeout configuration for component streaming tasks.
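A hedged sketch of such a timeout, using the async gem's Task#with_timeout (the 30-second budget is illustrative, not a recommended default):

@async_barrier.async do |task|
  task.with_timeout(30) do
    stream.each_chunk { |chunk| @main_output_queue.enqueue(chunk) }
  end
rescue Async::TimeoutError
  # Surface a clear error instead of blocking @async_barrier.wait forever.
  raise ReactOnRails::Error, "Component stream exceeded 30s"
end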

5. Test Assertion Change

From diff:

-        expect(chunks_read.count).to eq(1)
+        # NOTE: With async architecture, chunks are consumed in background immediately,
+        expect(chunks_read.count).to eq(3)

Question: Is this behavioral change intentional and documented?

  • Previously, chunks were lazily consumed (fiber.resume)
  • Now, all chunks are eagerly consumed in the background
  • Does this affect memory usage or timing in production?

Suggestion: Document this eager consumption behavior in the PR description or code comments.

6. Instance Variable Coupling

Observation: Components rely on instance variables set by the controller:

  • @async_barrier
  • @main_output_queue

Current check:

if @async_barrier.nil?
  raise ReactOnRails::Error, "You must call stream_view_containing_react_components..."
end

Suggestion: This is fine, but consider documenting the expected initialization order and lifecycle in code comments or module documentation.
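For instance, a module-level comment along these lines (lifecycle summarized from this PR's description):

# Streaming lifecycle (must happen in this order):
# 1. stream_view_containing_react_components opens the Sync block and
#    initializes @async_barrier and @main_output_queue.
# 2. During view rendering, each stream_react_component registers a
#    producer task on @async_barrier and blocks only for its first chunk.
# 3. drain_streams_concurrently waits on the barrier, closes the queue,
#    and drains remaining chunks to response.stream.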


🔒 Security Considerations

No Major Concerns

  • No SQL injection risks
  • No XSS vulnerabilities introduced
  • No authentication/authorization changes
  • Resource exhaustion is mitigated by LimitedQueue buffer size

Minor Note: If malicious components could create infinite streams, they would hang the request. Consider timeout mechanisms.


Performance Considerations

Positive Changes:

  • ✅ Unified queue reduces context switching
  • Async::Barrier is more efficient than manual fiber coordination
  • ✅ First chunk still delivered synchronously for fast initial render

Potential Concerns:

  1. Eager chunk consumption: All chunks now consumed immediately vs. lazy on-demand
  2. Memory: on_complete callback buffers all chunks for caching
  3. Queue size: LimitedQueue buffer size determines backpressure - ensure it's tuned correctly

📝 Documentation & Testing

Strengths:

  • ✅ Excellent PR description with architecture diagrams
  • ✅ Tests updated comprehensively
  • ✅ Clear comments in test files explaining async context setup

Suggestions:

  1. Add inline documentation in run_stream_inside_fiber explaining:

    • Why Async::Variable is used for first chunk
    • The lifecycle of @async_barrier and @main_output_queue
    • Thread-safety guarantees
  2. Document on_complete callback in method signature:

    # @param on_complete [Proc, nil] Optional callback invoked with all chunks when streaming completes.
    #                                 WARNING: Buffers all chunks in memory.
    def run_stream_inside_fiber(on_complete: nil)
  3. Add integration test for error scenarios:

    • Component raises exception during streaming
    • Component hangs indefinitely
    • Multiple components with mixed success/failure

🎯 Final Recommendations

Must Address Before Merge:

  1. ⚠️ Add explicit error handling in drain_streams_concurrently to ensure queue cleanup on failure
  2. ⚠️ Add test case for component streaming errors
  3. ⚠️ Document eager chunk consumption behavior change

Nice to Have:

  1. Add inline documentation for run_stream_inside_fiber
  2. Add timeout configuration for streaming tasks
  3. Add test case for empty stream behavior
  4. Document memory implications of on_complete callback

After Merge:

  1. Monitor production for any streaming errors or performance regressions
  2. Consider adding metrics for streaming task duration and queue depths

📊 Summary

Overall Assessment: ✅ Approve with Minor Changes

This is a well-thought-out refactoring that achieves its primary goal of enabling async code in streaming contexts. The code simplification and architectural improvements are significant positives.

The main concerns are around error handling and documenting behavioral changes. With the addition of explicit error handling and a test for failure scenarios, this would be ready to merge.

Great work on the clean async migration! 🎉


CI Status: ✅ All checks passing

@claude

claude bot commented Nov 23, 2025

Code Review: Replace manual Fibers with Async gem primitives

This is an excellent architectural improvement that modernizes the concurrency model. The refactoring is well-motivated, thoroughly tested, and achieves significant code simplification.

Strengths

  1. Clear motivation: The PR excellently explains why this change is needed (async code in each_chunk)
  2. Code simplification: -40 lines total, especially the drain method (45→11 lines)
  3. Smart on_complete callback: Much cleaner than the wrapper fiber approach for caching
  4. Excellent test coverage: Tests properly use async primitives and validate behavior

Critical Issues to Fix Before Merge

1. Error Handling in stream_view_containing_react_components

Problem: render_to_string is called AFTER initializing @async_barrier but BEFORE begin/ensure. If it raises, async tasks may be orphaned.

Fix: Move render_to_string inside begin block and add barrier cleanup to ensure:

begin
  template_string = render_to_string(template: template, **render_options)
  response.stream.write(template_string)
  drain_streams_concurrently(parent_task)
ensure
  @async_barrier&.wait
  @main_output_queue&.close
  response.stream.close if close_stream_at_end
end

2. First Chunk Variable Race in run_stream_inside_fiber

Problem: If stream.each_chunk raises before setting first_chunk_var.value, the main thread will hang forever on first_chunk_var.wait

Fix: Add rescue block to ensure variable always resolves:

@async_barrier.async do
  begin
    # ... existing code ...
  rescue => e
    first_chunk_var.value = nil unless first_chunk_var.resolved?
    raise
  end
end

Recommendations

  1. Add error scenario tests - Test that stream.each_chunk exceptions don't cause hung requests
  2. Document empty stream behavior - Add comment explaining nil semantics
  3. Add inline comments - Explain Async::Barrier, Async::Variable, Async::LimitedQueue for unfamiliar developers
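For item 3, one possible comment block (a summary, not authoritative async gem docs):

# Roles of the async primitives in this flow:
# - Async::Barrier      tracks the per-component producer tasks.
# - Async::Variable     one-shot value used to block for the first chunk only.
# - Async::LimitedQueue bounded queue that applies backpressure to producers
#                       while the writer drains chunks to response.stream.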

Security & Performance

  • No security issues identified
  • No performance regressions (same blocking behavior, same buffering for cache)
  • Positive improvements: simpler code, unified concurrency model

Verdict: ✅ Approve with Critical Fixes

Great refactoring! Just address the two error handling issues to prevent hung requests, then this is ready to merge.

🤖 Generated with Claude Code

@claude

claude bot commented Nov 23, 2025

Code Review: Replace manual Fibers with Async gem primitives

Hi @AbanoubGhadban! Overall this is a solid refactoring that successfully migrates from manual Fiber management to Async gem primitives.

✅ Strengths

  1. Clear Architecture Improvement - Enables running async gem code inside each_chunk with unified concurrency model and -40 lines of code
  2. Good Code Simplification - drain_streams_concurrently went from 45 lines to 11 lines
  3. Clean Callback API - on_complete callback pattern is elegant and explicit
  4. Comprehensive Test Updates - All tests properly updated to match new async architecture

🔍 Potential Issues

1. ⚠️ First Chunk Blocking Behavior

In run_stream_inside_fiber, you use Async::Variable with first_chunk_var.wait (blocking). How does this interact with Rails streaming? Does it prevent concurrent streaming with multiple components?

Recommendation: Add comment explaining why blocking here is safe/necessary.
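For example, a comment sketch (wording assumes the architecture described in this PR):

# Intentionally blocking: we wait only for the *first* chunk so the helper
# can return initial HTML synchronously (good TTFB). Remaining chunks flow
# through @main_output_queue, and other components' producer tasks keep
# running on the barrier while we wait here.
first_chunk_var.wait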

2. ⚠️ Queue Enqueue Order

With multiple components streaming to the same @main_output_queue, is chunk ordering guaranteed across components? Will chunks interleave or stay grouped?

Recommendation: Document expected behavior and ordering guarantees.

3. 🔍 Error Handling

Error propagation is implicit through @async_barrier.wait. Will @main_output_queue.close still be called if an error occurs?

Recommendation: Consider begin/ensure to guarantee queue closure on errors.

4. ⚠️ Test Behavioral Change

The assertion changed from expect(chunks_read.count).to eq(1) to eq(3). This changes the semantics: chunks were lazily evaluated before, and are now consumed immediately in the background.

Questions: Is this intentional? Could it affect callers that expect lazy evaluation? Could it increase memory usage for large streams?

Recommendation: If intentional, document clearly and consider CHANGELOG entry.

5. 🔍 Require Statement

require "async/variable" is inside the method, which is unconventional.

Recommendation: Move to top of file or document why loaded lazily.

🧪 Testing Recommendations

  1. Verify concurrent streaming with delayed Component A and normal Component B
  2. Test error propagation when one component fails
  3. Test empty stream case (no chunks yielded)
  4. Verify all_chunks only created when on_complete provided (memory efficiency)

🛡️ Security & Performance

✅ No security issues - primarily concurrency refactoring
📊 Consider benchmarking TTFB, total streaming time, and memory usage before/after

🎯 Summary

Merge Decision: ✅ Approve with minor changes

Must Address:

  1. Error handling for queue closure
  2. Document blocking behavior of first_chunk_var.wait
  3. Verify test behavioral change is intentional

Should Address:

  1. Move/document require statement
  2. Document chunk ordering guarantees
  3. Add error propagation and empty stream tests

Great work! Code is much cleaner and maintainable. 🚀

Generated with Claude Code

@claude

claude bot commented Nov 23, 2025

Code Review - PR #2111: Replace manual Fibers with Async gem primitives

✅ Overall Assessment

Verdict: LGTM with minor observations

This is a well-architected refactoring that successfully replaces manual Fiber management with Async gem primitives. The code is cleaner, more maintainable, and critically enables async code execution within component streaming. All CI checks pass, and the test updates demonstrate thorough validation.


🎯 Strengths

1. Architectural Improvement

  • Unified concurrency model: Using Async gem primitives (Async::Barrier, Async::Variable, Async::LimitedQueue) provides a consistent approach
  • Future-proof: Enables running async gem code inside each_chunk, unblocking the async props implementation
  • Cleaner abstractions: The intent is much clearer than manual Fiber management

2. Code Simplification

  • stream.rb: Reduced from 77 lines to 43 lines (-34 lines, -44%)
  • Eliminated complex methods: Removed build_producer_tasks and build_writer_task
  • Single responsibility: drain_streams_concurrently now has one clear job

3. Excellent Test Coverage

  • Tests updated to use new async primitives
  • Proper use of around blocks to provide Sync context
  • Tests validate both synchronous first chunk delivery AND async subsequent chunks
  • Added RuboCop disable/enable for RSpec/InstanceVariable with clear justification

4. Smart Callback Design

The on_complete callback is elegant:

on_complete: lambda { |chunks|
  Rails.cache.write(view_cache_key, chunks, raw_options[:cache_options] || {})
}

This is much cleaner than the previous wrapper fiber approach.


⚠️ Observations & Considerations

1. Potential Race Condition (Minor)

In run_stream_inside_fiber (react_on_rails_pro_helper.rb:314-341):

@async_barrier.async do
  stream = yield
  is_first = true

  stream.each_chunk do |chunk|
    all_chunks << chunk if on_complete

    if is_first
      first_chunk_var.value = chunk  # ← Assignment
      is_first = false
    else
      @main_output_queue.enqueue(chunk)
    end
  end

  first_chunk_var.value = nil if is_first  # ← Handle empty stream
  on_complete&.call(all_chunks)
end

# Wait for first chunk (blocking)
first_chunk_var.wait  # ← Reading
first_chunk_var.value

Question: Is there a race where first_chunk_var.wait might return before the value is set, or is Async::Variable guaranteed to block until the first assignment?

Looking at typical Async::Variable semantics, .wait should block until a value is assigned, so this is likely safe. However, I'd recommend:

  • Adding a comment explaining the blocking semantics for future maintainers
  • Confirming that Async::Variable.new (uninitialized) + .wait correctly blocks

Suggested documentation addition:

# Create a variable to hold the first chunk for synchronous return
# Async::Variable.wait blocks until a value is assigned, ensuring
# synchronous delivery of the first chunk
first_chunk_var = Async::Variable.new

2. Empty Stream Handling

The code handles empty streams gracefully:

first_chunk_var.value = nil if is_first

Question: What happens when a component produces zero chunks? Does this return nil from stream_react_component, and is that the expected behavior?

Consider:

  • Adding a test case for empty stream scenario
  • Documenting expected behavior when streams have no chunks

3. Error Propagation

Potential issue: If an error occurs inside the @async_barrier.async block (e.g., during stream.each_chunk), what happens to the caller waiting on first_chunk_var.wait?

With manual Fibers, errors would surface immediately on fiber.resume. With async tasks, the error might propagate differently.

Recommendation:

  • Verify error handling with a test that triggers an error during streaming
  • Ensure exceptions in the async task don't leave the caller hanging indefinitely
  • Consider wrapping the async block with error handling that sets first_chunk_var.value to an error state

Example test to add:

it "propagates errors from streaming to caller" do
  expect {
    stream_react_component(component_name, props: props) do
      raise "Simulated error"
    end
  }.to raise_error(/Simulated error/)
end

4. Cache Hit Path Still Uses Old Approach

In handle_stream_cache_hit (react_on_rails_pro_helper.rb:248-255):

initial_result, *rest_chunks = cached_chunks

# Enqueue remaining chunks asynchronously
@async_barrier.async do
  rest_chunks.each { |chunk| @main_output_queue.enqueue(chunk) }
end

# Return first chunk directly
initial_result

This is good! Cache hits now use the same async primitives. However:

  • Consider extracting the enqueuing logic to reduce duplication between cache hit/miss paths
  • The cache hit path doesn't use Async::Variable since it already has the first chunk (correct optimization)

5. Test Expectations Changed

In react_on_rails_pro_helper_spec.rb:392:

# NOTE: With async architecture, chunks are consumed in background immediately,
expect(chunks_read.count).to eq(3)  # Was: eq(1)

This is a behavioral change worth documenting:

  • Old: Chunks were lazy (read on fiber.resume)
  • New: Chunks are eager (all consumed immediately into queue)

Impact: This could affect memory usage if streams produce many large chunks. The LimitedQueue buffer size mitigates this, but it's worth noting in the migration guide.

6. Thread Safety of all_chunks Array

all_chunks = [] if on_complete  # Only collect if callback provided
# ...
all_chunks << chunk if on_complete  # Collect for callback

Potential issue: If multiple async tasks run concurrently and share state, array mutations aren't thread-safe in Ruby.

However, in this case: Each run_stream_inside_fiber call creates its own local all_chunks, so there's no shared state between components. ✅ This is safe.


🐛 Potential Bugs

Bug 1: Missing require in stream.rb

The file adds:

require "async"
require "async/barrier"
require "async/limited_queue"

But run_stream_inside_fiber also uses Async::Variable:

require "async/variable"

Is this require statement in the right file? It's in react_on_rails_pro_helper.rb, but should it also be in stream.rb if stream.rb initializes the async context?

Actually, looking at the code flow:

  • stream.rb creates @async_barrier and @main_output_queue
  • react_on_rails_pro_helper.rb uses Async::Variable

So the require statements are correctly placed. ✅


📝 Documentation & Comments

Good:

  • PR description is excellent with clear "before/after" architecture
  • Code changes are well-commented in the diff
  • Test changes include helpful notes about behavioral differences

Could improve:

  • Add inline comments explaining Async::Variable.wait blocking semantics
  • Document the empty stream case
  • Add a comment in drain_streams_concurrently explaining the barrier → close → drain pattern

🧪 Testing Recommendations

Add these test cases (if not already covered):

  1. Empty stream handling:

    it "handles components that produce no chunks" do
      # Mock a stream that yields nothing
    end
  2. Error propagation during streaming:

    it "propagates errors from streaming to caller" do
      expect { stream_react_component(...) { raise "Error" } }.to raise_error
    end
  3. Concurrent component streaming:

    it "handles multiple components streaming concurrently" do
      # Ensure chunks from different components don't interfere
    end
  4. Backpressure with large streams:

    it "respects buffer size limits" do
      # Stream more chunks than buffer_size to verify LimitedQueue blocking
    end

🔒 Security Considerations

No security issues identified. The changes are purely internal concurrency refactoring.


⚡ Performance Considerations

Positive:

  • Async gem is well-optimized for concurrent I/O
  • LimitedQueue provides backpressure to prevent memory bloat
  • First chunk still delivered synchronously (no latency regression)

Neutral:

  • Eager chunk consumption vs lazy (trade memory for simplicity)
  • The buffer size (concurrent_component_streaming_buffer_size) is configurable, which is good

Recommendation:

  • Monitor memory usage in production with large streams
  • Consider documenting recommended buffer sizes for different use cases

📋 Checklist from PR Description

  • Run existing streaming integration tests ✅ (CI passed)
  • Verify concurrent streaming still works ✅ (tests updated)
  • Test with multiple components ✅ (implied by existing tests)
  • Test error handling ⚠️ (recommend adding explicit error test)

🎓 Learning & Best Practices

Excellent patterns demonstrated:

  1. Replacing low-level primitives (Fibers) with higher-level abstractions (Async gem)
  2. Using callbacks (on_complete) for cross-cutting concerns (caching)
  3. Comprehensive test updates that validate new behavior
  4. Clear PR documentation explaining architectural changes

🚦 Final Recommendation

APPROVE with suggestions for follow-up:

Before merge (optional but recommended):

  • Add test case for error propagation during streaming
  • Add test case for empty stream handling
  • Add inline comment explaining Async::Variable.wait blocking semantics

After merge (nice-to-have):

  • Monitor production memory usage with new eager chunk consumption
  • Update Pro package docs with behavioral changes (eager vs lazy)
  • Consider extracting chunk enqueuing logic to reduce duplication

Great work on this refactoring! The code is significantly cleaner and more maintainable. 🎉


Reviewed using Claude Code

-def run_stream_inside_fiber
-  if @rorp_rendering_fibers.nil?
+def run_stream_inside_fiber(on_complete:)
+  require "async/variable"
Collaborator

Any reason to do it only here and not at the beginning of the file?
